Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1620144225 - Will randomize all specs
Will run 5484 specs

Running in parallel across 10 nodes

May 4 16:03:47.327: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.331: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 4 16:03:47.364: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 4 16:03:47.425: INFO: The status of Pod cmk-init-discover-node1-m8vvw is Succeeded, skipping waiting
May 4 16:03:47.425: INFO: The status of Pod cmk-init-discover-node2-zlxzj is Succeeded, skipping waiting
May 4 16:03:47.425: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 4 16:03:47.425: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 4 16:03:47.425: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 4 16:03:47.443: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 4 16:03:47.443: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 4 16:03:47.443: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 4 16:03:47.443: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 4 16:03:47.443: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 4 16:03:47.443: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 4 16:03:47.443: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 4 16:03:47.443: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 4 16:03:47.443: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 4 16:03:47.443: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 4 16:03:47.443: INFO: e2e test version: v1.19.10
May 4 16:03:47.444: INFO: kube-apiserver version: v1.19.8
May 4 16:03:47.444: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.450: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
May 4 16:03:47.447: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.467: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
May 4 16:03:47.453: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.474: INFO: Cluster IP family: ipv4
S
------------------------------
May 4 16:03:47.456: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.476: INFO: Cluster IP family: ipv4
SSSS
------------------------------
May 4 16:03:47.458: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.478: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
May 4 16:03:47.467: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.487: INFO: Cluster IP family: ipv4
S
------------------------------
May 4 16:03:47.468: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.488: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 4 16:03:47.468: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.490: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 4 16:03:47.491: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.512: INFO: Cluster IP family: ipv4
SS
------------------------------
May 4 16:03:47.488: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:03:47.514: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
May 4 16:03:47.572: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.574: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 4 16:03:47.596: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 4 16:03:47.598: INFO: starting watch
STEP: patching
STEP: updating
May 4 16:03:47.608: INFO: waiting for watch events with expected annotations
May 4 16:03:47.608: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:47.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-163" for this suite.
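As each spec above finishes, the suite prints a one-line JSON record of the form `{"msg":"PASSED ...","total":-1,"completed":1,"skipped":25,"failed":0}`. Such records are convenient for post-processing a run. A minimal sketch of tallying them from raw log text; `tally_results` is a hypothetical helper, not part of the e2e framework:

```python
import json
import re

# Matches the per-spec JSON result records seen in this log, e.g.
# {"msg":"PASSED ...","total":-1,"completed":1,"skipped":25,"failed":0}.
# The records contain no nested braces, so [^{}]* is sufficient.
RESULT_RE = re.compile(r'\{"msg":"(?:PASSED|FAILED)[^{}]*\}')

def tally_results(log_text: str) -> dict:
    """Count PASSED/FAILED spec records embedded in raw log text."""
    counts = {"passed": 0, "failed": 0}
    for match in RESULT_RE.finditer(log_text):
        record = json.loads(match.group(0))
        key = "passed" if record["msg"].startswith("PASSED") else "failed"
        counts[key] += 1
    return counts

sample = ('{"msg":"PASSED [sig-network] Ingress API should support creating '
          'Ingress API operations [Conformance]","total":-1,"completed":1,'
          '"skipped":25,"failed":0}')
print(tally_results(sample))  # {'passed': 1, 'failed': 0}
```

In practice one would feed the whole run's log through this to cross-check the suite's own final summary.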
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":1,"skipped":25,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
May 4 16:03:47.568: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.569: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:03:47.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8883 version'
May 4 16:03:47.694: INFO: stderr: ""
May 4 16:03:47.694: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.10\", GitCommit:\"98d5dc5d36d34a7ee13368a7893dcb400ec4e566\", GitTreeState:\"clean\", BuildDate:\"2021-04-15T03:28:42Z\", GoVersion:\"go1.15.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.8\", GitCommit:\"fd5d41537aee486160ad9b5356a9d82363273721\", GitTreeState:\"clean\", BuildDate:\"2021-02-17T12:33:08Z\", GoVersion:\"go1.15.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:47.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8883" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":1,"skipped":37,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
May 4 16:03:47.566: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.568: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-ad2ddfd0-7c0c-40a5-9633-56547187e441
STEP: Creating a pod to test consume configMaps
May 4 16:03:47.593: INFO: Waiting up to 5m0s for pod "pod-configmaps-3aeb9e95-a16a-4baa-9127-3943c7a73255" in namespace "configmap-1109" to be "Succeeded or Failed"
May 4 16:03:47.595: INFO: Pod "pod-configmaps-3aeb9e95-a16a-4baa-9127-3943c7a73255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.512173ms
May 4 16:03:49.599: INFO: Pod "pod-configmaps-3aeb9e95-a16a-4baa-9127-3943c7a73255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006098549s
May 4 16:03:51.602: INFO: Pod "pod-configmaps-3aeb9e95-a16a-4baa-9127-3943c7a73255": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008739965s
STEP: Saw pod success
May 4 16:03:51.602: INFO: Pod "pod-configmaps-3aeb9e95-a16a-4baa-9127-3943c7a73255" satisfied condition "Succeeded or Failed"
May 4 16:03:51.604: INFO: Trying to get logs from node node1 pod pod-configmaps-3aeb9e95-a16a-4baa-9127-3943c7a73255 container configmap-volume-test:
STEP: delete the pod
May 4 16:03:51.617: INFO: Waiting for pod pod-configmaps-3aeb9e95-a16a-4baa-9127-3943c7a73255 to disappear
May 4 16:03:51.619: INFO: Pod pod-configmaps-3aeb9e95-a16a-4baa-9127-3943c7a73255 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:51.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1109" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 4 16:03:47.551: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.553: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-4a38fb3c-2b33-4b21-b88f-bac161a7fad1
STEP: Creating a pod to test consume configMaps
May 4 16:03:47.572: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-010093bd-68df-4fa4-98c6-ba31b411eaae" in namespace "projected-1227" to be "Succeeded or Failed"
May 4 16:03:47.574: INFO: Pod "pod-projected-configmaps-010093bd-68df-4fa4-98c6-ba31b411eaae": Phase="Pending", Reason="", readiness=false. Elapsed: 1.991611ms
May 4 16:03:49.577: INFO: Pod "pod-projected-configmaps-010093bd-68df-4fa4-98c6-ba31b411eaae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005129996s
May 4 16:03:51.580: INFO: Pod "pod-projected-configmaps-010093bd-68df-4fa4-98c6-ba31b411eaae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008158423s
STEP: Saw pod success
May 4 16:03:51.580: INFO: Pod "pod-projected-configmaps-010093bd-68df-4fa4-98c6-ba31b411eaae" satisfied condition "Succeeded or Failed"
May 4 16:03:51.583: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-010093bd-68df-4fa4-98c6-ba31b411eaae container projected-configmap-volume-test:
STEP: delete the pod
May 4 16:03:51.693: INFO: Waiting for pod pod-projected-configmaps-010093bd-68df-4fa4-98c6-ba31b411eaae to disappear
May 4 16:03:51.695: INFO: Pod pod-projected-configmaps-010093bd-68df-4fa4-98c6-ba31b411eaae no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:51.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1227" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
May 4 16:03:47.505: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.511: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 4 16:03:52.059: INFO: Successfully updated pod "annotationupdate64ca9e85-6557-4298-b559-0954ffccad1a"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:54.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3476" for this suite.
• [SLOW TEST:6.594 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 4 16:03:47.517: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.522: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-b3229c6a-49a9-40fe-a7ba-42a55ae42c8e
STEP: Creating a pod to test consume secrets
May 4 16:03:47.549: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b" in namespace "projected-6467" to be "Succeeded or Failed"
May 4 16:03:47.552: INFO: Pod "pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836295ms
May 4 16:03:49.555: INFO: Pod "pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005328252s
May 4 16:03:51.558: INFO: Pod "pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008385079s
May 4 16:03:53.561: INFO: Pod "pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011140441s
May 4 16:03:55.564: INFO: Pod "pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.014275748s
STEP: Saw pod success
May 4 16:03:55.564: INFO: Pod "pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b" satisfied condition "Succeeded or Failed"
May 4 16:03:55.566: INFO: Trying to get logs from node node2 pod pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b container projected-secret-volume-test:
STEP: delete the pod
May 4 16:03:55.579: INFO: Waiting for pod pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b to disappear
May 4 16:03:55.581: INFO: Pod pod-projected-secrets-19c0709a-aef5-4add-adcf-b661bb2ae67b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:55.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6467" for this suite.
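The repeated `Phase="Pending" ... Elapsed: ...` lines above come from the framework polling the pod roughly every two seconds until it reaches a terminal phase or a five-minute timeout expires. A minimal sketch of that wait loop, assuming a caller-supplied `get_phase` callable (the real framework instead queries the API server for the Pod object):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll get_phase() until the pod reaches a terminal phase
    ("Succeeded" or "Failed") or the timeout expires, mirroring the
    'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"'
    lines in this log. Hypothetical helper, for illustration only."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        # Echo the log's per-poll status line.
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        time.sleep(interval_s)

# Simulated pod: Pending on the first two polls, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), interval_s=0.01)
```

Note the loop returns on "Failed" as well; the caller decides whether a terminal "Failed" phase satisfies the condition, as the "Succeeded or Failed" wording in the log suggests.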
• [SLOW TEST:8.089 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
May 4 16:03:47.557: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.559: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
May 4 16:03:47.579: INFO: Waiting up to 5m0s for pod "pod-f68b2363-733c-4f56-9898-de67d4497940" in namespace "emptydir-6485" to be "Succeeded or Failed"
May 4 16:03:47.581: INFO: Pod "pod-f68b2363-733c-4f56-9898-de67d4497940": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240617ms
May 4 16:03:49.584: INFO: Pod "pod-f68b2363-733c-4f56-9898-de67d4497940": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005806736s
May 4 16:03:51.587: INFO: Pod "pod-f68b2363-733c-4f56-9898-de67d4497940": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008062259s
May 4 16:03:53.589: INFO: Pod "pod-f68b2363-733c-4f56-9898-de67d4497940": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010782083s
May 4 16:03:55.592: INFO: Pod "pod-f68b2363-733c-4f56-9898-de67d4497940": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.013201372s
STEP: Saw pod success
May 4 16:03:55.592: INFO: Pod "pod-f68b2363-733c-4f56-9898-de67d4497940" satisfied condition "Succeeded or Failed"
May 4 16:03:55.594: INFO: Trying to get logs from node node2 pod pod-f68b2363-733c-4f56-9898-de67d4497940 container test-container:
STEP: delete the pod
May 4 16:03:55.607: INFO: Waiting for pod pod-f68b2363-733c-4f56-9898-de67d4497940 to disappear
May 4 16:03:55.610: INFO: Pod pod-f68b2363-733c-4f56-9898-de67d4497940 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:55.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6485" for this suite.
• [SLOW TEST:8.093 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 4 16:03:47.554: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.556: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-a0ff08a3-1521-4e65-a8f4-29c0d9ec20c5
STEP: Creating a pod to test consume configMaps
May 4 16:03:47.582: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a" in namespace "projected-3654" to be "Succeeded or Failed"
May 4 16:03:47.585: INFO: Pod "pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500914ms
May 4 16:03:49.588: INFO: Pod "pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005426299s
May 4 16:03:51.591: INFO: Pod "pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00836441s
May 4 16:03:53.593: INFO: Pod "pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010959513s
May 4 16:03:55.596: INFO: Pod "pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.013958576s
STEP: Saw pod success
May 4 16:03:55.596: INFO: Pod "pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a" satisfied condition "Succeeded or Failed"
May 4 16:03:55.598: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a container projected-configmap-volume-test:
STEP: delete the pod
May 4 16:03:55.613: INFO: Waiting for pod pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a to disappear
May 4 16:03:55.614: INFO: Pod pod-projected-configmaps-12cd27f2-6b49-47e7-a195-746b7d451a6a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:55.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3654" for this suite.
• [SLOW TEST:8.099 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
May 4 16:03:47.563: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.564: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 4 16:03:47.591: INFO: Waiting up to 5m0s for pod "downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef" in namespace "downward-api-4599" to be "Succeeded or Failed"
May 4 16:03:47.593: INFO: Pod "downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008913ms
May 4 16:03:49.597: INFO: Pod "downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00543827s
May 4 16:03:51.601: INFO: Pod "downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010087992s
May 4 16:03:53.604: INFO: Pod "downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012587858s
May 4 16:03:55.608: INFO: Pod "downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016438593s
May 4 16:03:57.611: INFO: Pod "downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019489337s
STEP: Saw pod success
May 4 16:03:57.611: INFO: Pod "downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef" satisfied condition "Succeeded or Failed"
May 4 16:03:57.613: INFO: Trying to get logs from node node2 pod downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef container dapi-container:
STEP: delete the pod
May 4 16:03:57.863: INFO: Waiting for pod downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef to disappear
May 4 16:03:57.865: INFO: Pod downward-api-565f12be-e5b0-4086-a762-f6cc9fcd66ef no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:57.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4599" for this suite.
• [SLOW TEST:10.335 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:51.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 4 16:03:56.382: INFO: Successfully updated pod "labelsupdate08f4fa40-fc10-43d0-b145-5fefe1215c29"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:58.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3387" for this suite.
• [SLOW TEST:6.766 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":43,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:03:58.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:58.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1618" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:03:47.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 4 16:03:55.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 --namespace=crd-publish-openapi-74 create -f -'
May 4 16:03:55.900: INFO: stderr: ""
May 4 16:03:55.900: INFO: stdout: "e2e-test-crd-publish-openapi-118-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 4 16:03:55.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 --namespace=crd-publish-openapi-74 delete e2e-test-crd-publish-openapi-118-crds test-cr'
May 4 16:03:56.069: INFO: stderr: ""
May 4 16:03:56.069: INFO: stdout:
"e2e-test-crd-publish-openapi-118-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 4 16:03:56.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 --namespace=crd-publish-openapi-74 apply -f -'
May 4 16:03:56.309: INFO: stderr: ""
May 4 16:03:56.309: INFO: stdout: "e2e-test-crd-publish-openapi-118-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 4 16:03:56.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 --namespace=crd-publish-openapi-74 delete e2e-test-crd-publish-openapi-118-crds test-cr'
May 4 16:03:56.466: INFO: stderr: ""
May 4 16:03:56.466: INFO: stdout: "e2e-test-crd-publish-openapi-118-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 4 16:03:56.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 explain e2e-test-crd-publish-openapi-118-crds'
May 4 16:03:56.731: INFO: stderr: ""
May 4 16:03:56.731: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-118-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:59.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-74" for this suite.
• [SLOW TEST:11.942 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":2,"skipped":46,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:55.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 4 16:03:55.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2b3261b-c0bd-4438-8774-77bcee821f1d" in namespace "projected-8531" to be "Succeeded or Failed"
May 4 16:03:55.638: INFO: Pod "downwardapi-volume-d2b3261b-c0bd-4438-8774-77bcee821f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.922871ms
May 4 16:03:57.641: INFO: Pod "downwardapi-volume-d2b3261b-c0bd-4438-8774-77bcee821f1d": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.004813227s
May 4 16:03:59.644: INFO: Pod "downwardapi-volume-d2b3261b-c0bd-4438-8774-77bcee821f1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007503819s
STEP: Saw pod success
May 4 16:03:59.644: INFO: Pod "downwardapi-volume-d2b3261b-c0bd-4438-8774-77bcee821f1d" satisfied condition "Succeeded or Failed"
May 4 16:03:59.646: INFO: Trying to get logs from node node1 pod downwardapi-volume-d2b3261b-c0bd-4438-8774-77bcee821f1d container client-container:
STEP: delete the pod
May 4 16:03:59.657: INFO: Waiting for pod downwardapi-volume-d2b3261b-c0bd-4438-8774-77bcee821f1d to disappear
May 4 16:03:59.659: INFO: Pod downwardapi-volume-d2b3261b-c0bd-4438-8774-77bcee821f1d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:59.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8531" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
May 4 16:03:47.528: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:03:47.530: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:03:47.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 4 16:03:55.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1770 --namespace=crd-publish-openapi-1770 create -f -'
May 4 16:03:55.849: INFO: stderr: ""
May 4 16:03:55.849: INFO: stdout: "e2e-test-crd-publish-openapi-257-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 4 16:03:55.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1770 --namespace=crd-publish-openapi-1770 delete e2e-test-crd-publish-openapi-257-crds test-cr'
May 4 16:03:56.010: INFO: stderr: ""
May 4 16:03:56.010: INFO: stdout:
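The "memory request" spec that just passed mounts the container's own resource spec as files through the downward API volume plugin. A minimal sketch of such a pod, assuming illustrative names, could be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests: {memory: 32Mi}
      limits: {memory: 64Mi}
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      # The file content is the request value divided by the divisor.
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi
```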
"e2e-test-crd-publish-openapi-257-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 4 16:03:56.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1770 --namespace=crd-publish-openapi-1770 apply -f -'
May 4 16:03:56.257: INFO: stderr: ""
May 4 16:03:56.257: INFO: stdout: "e2e-test-crd-publish-openapi-257-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 4 16:03:56.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1770 --namespace=crd-publish-openapi-1770 delete e2e-test-crd-publish-openapi-257-crds test-cr'
May 4 16:03:56.400: INFO: stderr: ""
May 4 16:03:56.400: INFO: stdout: "e2e-test-crd-publish-openapi-257-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 4 16:03:56.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1770 explain e2e-test-crd-publish-openapi-257-crds'
May 4 16:03:56.665: INFO: stderr: ""
May 4 16:03:56.665: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-257-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:03:59.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1770" for this suite.
• [SLOW TEST:12.164 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:55.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 4 16:03:55.717: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82a77a3c-7717-4fc6-9728-14734b3c7268" in namespace "downward-api-9590" to be "Succeeded or Failed"
May 4 16:03:55.719: INFO: Pod "downwardapi-volume-82a77a3c-7717-4fc6-9728-14734b3c7268": Phase="Pending", Reason="", readiness=false.
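The "preserving unknown fields at the schema root" spec above exercises `x-kubernetes-preserve-unknown-fields`, which disables pruning of unspecified fields so arbitrary properties survive create/apply. A hedged sketch of such a CRD (group and names are illustrative; the test generates random ones like `e2e-test-crd-publish-openapi-257-crds`):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com   # illustrative
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # Keep any unknown fields instead of pruning them at the schema root.
        x-kubernetes-preserve-unknown-fields: true
```

With this flag set, `kubectl create`/`apply` accept custom resources carrying properties the schema never declared, which is exactly what the client-side validation steps in the log verify.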
Elapsed: 1.875648ms
May 4 16:03:57.722: INFO: Pod "downwardapi-volume-82a77a3c-7717-4fc6-9728-14734b3c7268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004728022s
May 4 16:03:59.726: INFO: Pod "downwardapi-volume-82a77a3c-7717-4fc6-9728-14734b3c7268": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009228903s
May 4 16:04:01.729: INFO: Pod "downwardapi-volume-82a77a3c-7717-4fc6-9728-14734b3c7268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011735367s
STEP: Saw pod success
May 4 16:04:01.729: INFO: Pod "downwardapi-volume-82a77a3c-7717-4fc6-9728-14734b3c7268" satisfied condition "Succeeded or Failed"
May 4 16:04:01.731: INFO: Trying to get logs from node node1 pod downwardapi-volume-82a77a3c-7717-4fc6-9728-14734b3c7268 container client-container:
STEP: delete the pod
May 4 16:04:01.743: INFO: Waiting for pod downwardapi-volume-82a77a3c-7717-4fc6-9728-14734b3c7268 to disappear
May 4 16:04:01.745: INFO: Pod downwardapi-volume-82a77a3c-7717-4fc6-9728-14734b3c7268 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:04:01.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9590" for this suite.
• [SLOW TEST:6.064 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":44,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:47.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating replication controller my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1
May 4 16:03:47.808: INFO: Pod name my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1: Found 0 pods out of 1
May 4 16:03:52.811: INFO: Pod name my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1: Found 1 pods out of 1
May 4 16:03:52.811: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1" are running
May 4 16:03:56.816: INFO: Pod "my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1-k7prl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC
LastTransitionTime:2021-05-04 16:03:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-04 16:03:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-04 16:03:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-04 16:03:47 +0000 UTC Reason: Message:}])
May 4 16:03:56.817: INFO: Trying to dial the pod
May 4 16:04:01.936: INFO: Controller my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1: Got expected result from replica 1 [my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1-k7prl]: "my-hostname-basic-e4fdc944-b95d-4e7c-abd5-75fd6190c2d1-k7prl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:04:01.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4217" for this suite.
• [SLOW TEST:14.239 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:59.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:03:59.736: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:04:05.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5009" for this suite.
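The ReplicationController spec above creates one replica and dials each pod, expecting the pod to serve back its own name. A minimal manifest in the same shape (the image and port are assumptions based on the usual e2e hostname-serving container, not taken from this log) might be:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic   # illustrative; the test appends a random UUID
spec:
  replicas: 1
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image
        args: ["serve-hostname"]   # serves the pod's hostname over HTTP
        ports:
        - containerPort: 9376
```

The "Got expected result from replica 1" line corresponds to dialing this port on each pod and comparing the response with the pod name.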
• [SLOW TEST:6.148 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  custom resource defaulting for requests and from storage works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:59.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
May 4 16:04:09.681: INFO: Pod pod-hostip-16acc66e-6ecc-4d4f-aafd-e041ffb11874 has hostIP: 10.10.190.208
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:04:09.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-917" for this suite.
• [SLOW TEST:10.050 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":48,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:03:58.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 4 16:03:59.027: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 4 16:04:01.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."},
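The host-IP spec reads `pod.Status.HostIP` through the API after the pod is scheduled. The same value can also be exposed inside a container via the downward API; a minimal sketch (names are illustrative, not the `pod-hostip-…` pod from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostip-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo $HOST_IP"]
    env:
    # Populated by the kubelet from the pod's status once scheduled.
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```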
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:04:03.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:04:05.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing",
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:04:07.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741039, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 4 16:04:10.044: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 4 16:04:10.060: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:04:10.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-160" for this suite.
STEP: Destroying namespace "webhook-160-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:11.509 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:04:01.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
May 4 16:04:02.245: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 4 16:04:02.260: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 4 16:04:04.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0,
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:04:06.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:04:08.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1,
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:04:10.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741042, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 4 16:04:13.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should
work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:13.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2004" for this suite. STEP: Destroying namespace "webhook-2004-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.609 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":3,"skipped":47,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":4,"skipped":62,"failed":0} 
[BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:10.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:04:10.155: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9b60f0e-9a7d-4605-9368-98a7a7b0db8e" in namespace "downward-api-1560" to be "Succeeded or Failed" May 4 16:04:10.157: INFO: Pod "downwardapi-volume-f9b60f0e-9a7d-4605-9368-98a7a7b0db8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209474ms May 4 16:04:12.160: INFO: Pod "downwardapi-volume-f9b60f0e-9a7d-4605-9368-98a7a7b0db8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005016132s May 4 16:04:14.164: INFO: Pod "downwardapi-volume-f9b60f0e-9a7d-4605-9368-98a7a7b0db8e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009233999s STEP: Saw pod success May 4 16:04:14.164: INFO: Pod "downwardapi-volume-f9b60f0e-9a7d-4605-9368-98a7a7b0db8e" satisfied condition "Succeeded or Failed" May 4 16:04:14.167: INFO: Trying to get logs from node node1 pod downwardapi-volume-f9b60f0e-9a7d-4605-9368-98a7a7b0db8e container client-container: STEP: delete the pod May 4 16:04:14.181: INFO: Waiting for pod downwardapi-volume-f9b60f0e-9a7d-4605-9368-98a7a7b0db8e to disappear May 4 16:04:14.182: INFO: Pod downwardapi-volume-f9b60f0e-9a7d-4605-9368-98a7a7b0db8e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:14.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1560" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":62,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:03:54.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 4 16:04:08.283: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 4 16:04:08.285: INFO: Pod pod-with-prestop-exec-hook still exists May 4 16:04:10.286: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 4 16:04:10.288: INFO: Pod pod-with-prestop-exec-hook still exists May 4 16:04:12.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 4 16:04:12.288: INFO: Pod pod-with-prestop-exec-hook still exists May 4 16:04:14.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 4 16:04:14.290: INFO: Pod pod-with-prestop-exec-hook still exists May 4 16:04:16.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 4 16:04:16.288: INFO: Pod pod-with-prestop-exec-hook still exists May 4 16:04:18.287: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 4 16:04:18.291: INFO: Pod pod-with-prestop-exec-hook still exists May 4 16:04:20.286: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 4 16:04:20.288: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:20.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1792" for this suite. 
• [SLOW TEST:26.085 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":76,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:09.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 4 16:04:10.204: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 4 16:04:12.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741050, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741050, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741050, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741050, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:04:15.223: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:04:15.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:21.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-842" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:11.645 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":4,"skipped":49,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:21.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events May 4 16:04:21.429: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:21.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3901" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":5,"skipped":76,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:21.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 4 16:04:21.575: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7314 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:21.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7314" for this suite. 
• ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:03:55.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-hn49 STEP: Creating a pod to test atomic-volume-subpath May 4 16:03:55.703: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-hn49" in namespace "subpath-9444" to be "Succeeded or Failed" May 4 16:03:55.705: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369934ms May 4 16:03:57.708: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005702737s May 4 16:03:59.716: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01278214s May 4 16:04:01.718: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015349303s May 4 16:04:03.721: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Running", Reason="", readiness=true. Elapsed: 8.018149068s May 4 16:04:05.723: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Running", Reason="", readiness=true. Elapsed: 10.020286597s May 4 16:04:07.726: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.023381702s May 4 16:04:09.731: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Running", Reason="", readiness=true. Elapsed: 14.028471499s May 4 16:04:11.735: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Running", Reason="", readiness=true. Elapsed: 16.032156403s May 4 16:04:13.738: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Running", Reason="", readiness=true. Elapsed: 18.035722024s May 4 16:04:15.743: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Running", Reason="", readiness=true. Elapsed: 20.039925812s May 4 16:04:17.747: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Running", Reason="", readiness=true. Elapsed: 22.044474248s May 4 16:04:19.751: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Running", Reason="", readiness=true. Elapsed: 24.047855288s May 4 16:04:21.754: INFO: Pod "pod-subpath-test-projected-hn49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.050893772s STEP: Saw pod success May 4 16:04:21.754: INFO: Pod "pod-subpath-test-projected-hn49" satisfied condition "Succeeded or Failed" May 4 16:04:21.756: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-hn49 container test-container-subpath-projected-hn49: STEP: delete the pod May 4 16:04:21.768: INFO: Waiting for pod pod-subpath-test-projected-hn49 to disappear May 4 16:04:21.771: INFO: Pod pod-subpath-test-projected-hn49 no longer exists STEP: Deleting pod pod-subpath-test-projected-hn49 May 4 16:04:21.771: INFO: Deleting pod "pod-subpath-test-projected-hn49" in namespace "subpath-9444" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:21.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9444" for this suite. 
• [SLOW TEST:26.115 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:14.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 4 16:04:20.791: INFO: Successfully updated pod "pod-update-activedeadlineseconds-39f72d94-db9c-417a-8811-4a41cdc72e6c" May 4 16:04:20.791: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-39f72d94-db9c-417a-8811-4a41cdc72e6c" in namespace "pods-3568" to be "terminated due to deadline exceeded" May 4 16:04:20.793: INFO: Pod "pod-update-activedeadlineseconds-39f72d94-db9c-417a-8811-4a41cdc72e6c": Phase="Running", Reason="", 
readiness=true. Elapsed: 1.947705ms May 4 16:04:22.796: INFO: Pod "pod-update-activedeadlineseconds-39f72d94-db9c-417a-8811-4a41cdc72e6c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.005120465s May 4 16:04:22.796: INFO: Pod "pod-update-activedeadlineseconds-39f72d94-db9c-417a-8811-4a41cdc72e6c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:22.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3568" for this suite. • [SLOW TEST:8.564 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":83,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:20.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-03979b8a-89d4-45d8-8841-b07971753564 STEP: Creating a pod to test consume secrets May 4 16:04:20.346: INFO: Waiting up 
to 5m0s for pod "pod-secrets-30fefdad-6290-48b4-b313-604a61b2ea37" in namespace "secrets-2505" to be "Succeeded or Failed" May 4 16:04:20.350: INFO: Pod "pod-secrets-30fefdad-6290-48b4-b313-604a61b2ea37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204934ms May 4 16:04:22.353: INFO: Pod "pod-secrets-30fefdad-6290-48b4-b313-604a61b2ea37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007264924s May 4 16:04:24.356: INFO: Pod "pod-secrets-30fefdad-6290-48b4-b313-604a61b2ea37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010330735s STEP: Saw pod success May 4 16:04:24.356: INFO: Pod "pod-secrets-30fefdad-6290-48b4-b313-604a61b2ea37" satisfied condition "Succeeded or Failed" May 4 16:04:24.359: INFO: Trying to get logs from node node1 pod pod-secrets-30fefdad-6290-48b4-b313-604a61b2ea37 container secret-volume-test: STEP: delete the pod May 4 16:04:24.632: INFO: Waiting for pod pod-secrets-30fefdad-6290-48b4-b313-604a61b2ea37 to disappear May 4 16:04:24.634: INFO: Pod pod-secrets-30fefdad-6290-48b4-b313-604a61b2ea37 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:24.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2505" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":77,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:24.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:24.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8086" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":4,"skipped":87,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:22.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-8cb6b3d2-8333-4455-9a7b-5a8cd979cdbd STEP: Creating a pod to test consume configMaps May 4 16:04:22.862: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c39cb6e-cbea-4026-8592-ba54a6112bb7" in namespace "projected-3542" to be "Succeeded or Failed" May 4 16:04:22.865: INFO: Pod "pod-projected-configmaps-9c39cb6e-cbea-4026-8592-ba54a6112bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.539584ms May 4 16:04:24.867: INFO: Pod "pod-projected-configmaps-9c39cb6e-cbea-4026-8592-ba54a6112bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005120013s May 4 16:04:26.870: INFO: Pod "pod-projected-configmaps-9c39cb6e-cbea-4026-8592-ba54a6112bb7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007888663s STEP: Saw pod success May 4 16:04:26.870: INFO: Pod "pod-projected-configmaps-9c39cb6e-cbea-4026-8592-ba54a6112bb7" satisfied condition "Succeeded or Failed" May 4 16:04:26.872: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-9c39cb6e-cbea-4026-8592-ba54a6112bb7 container projected-configmap-volume-test: STEP: delete the pod May 4 16:04:26.884: INFO: Waiting for pod pod-projected-configmaps-9c39cb6e-cbea-4026-8592-ba54a6112bb7 to disappear May 4 16:04:26.886: INFO: Pod pod-projected-configmaps-9c39cb6e-cbea-4026-8592-ba54a6112bb7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:26.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3542" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":92,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:26.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods May 4 16:04:26.951: INFO: created test-pod-1 May 4 16:04:26.960: INFO: created test-pod-2 May 4 16:04:26.973: INFO: created test-pod-3 STEP: waiting for all 3 pods to 
be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:26.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-391" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":8,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:24.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 4 16:04:24.778: INFO: Waiting up to 5m0s for pod "pod-3a9b9314-088c-4af4-bea6-543a98c7ca71" in namespace "emptydir-7653" to be "Succeeded or Failed" May 4 16:04:24.781: INFO: Pod "pod-3a9b9314-088c-4af4-bea6-543a98c7ca71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177316ms May 4 16:04:26.783: INFO: Pod "pod-3a9b9314-088c-4af4-bea6-543a98c7ca71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004890623s May 4 16:04:28.787: INFO: Pod "pod-3a9b9314-088c-4af4-bea6-543a98c7ca71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00873835s STEP: Saw pod success May 4 16:04:28.787: INFO: Pod "pod-3a9b9314-088c-4af4-bea6-543a98c7ca71" satisfied condition "Succeeded or Failed" May 4 16:04:28.789: INFO: Trying to get logs from node node2 pod pod-3a9b9314-088c-4af4-bea6-543a98c7ca71 container test-container: STEP: delete the pod May 4 16:04:28.804: INFO: Waiting for pod pod-3a9b9314-088c-4af4-bea6-543a98c7ca71 to disappear May 4 16:04:28.806: INFO: Pod pod-3a9b9314-088c-4af4-bea6-543a98c7ca71 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:28.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7653" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:21.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-e9fcfff8-428b-4707-9307-ba30e3228f6a STEP: Creating secret with name s-test-opt-upd-7843cdfb-1b0a-48b9-8871-46fdb209eeb1 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e9fcfff8-428b-4707-9307-ba30e3228f6a STEP: Updating secret s-test-opt-upd-7843cdfb-1b0a-48b9-8871-46fdb209eeb1 STEP: Creating secret with 
name s-test-opt-create-5b1f6bd9-b9b9-4421-8033-107ac1623c08 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:29.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3909" for this suite. • [SLOW TEST:8.117 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:13.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 4 16:04:13.412: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed 
[AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:29.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-229" for this suite. • [SLOW TEST:16.578 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:03:51.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 4 16:03:51.709: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 4 16:04:09.278: INFO: >>> kubeConfig: /root/.kube/config May 4 16:04:17.191: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:33.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-285" for this suite. • [SLOW TEST:42.140 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:30.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:04:30.172: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f204a62a-d30b-4202-b164-2ac2caa485de" in namespace "security-context-test-5167" to be "Succeeded or Failed" May 4 16:04:30.174: INFO: Pod 
"alpine-nnp-false-f204a62a-d30b-4202-b164-2ac2caa485de": Phase="Pending", Reason="", readiness=false. Elapsed: 1.981256ms May 4 16:04:32.177: INFO: Pod "alpine-nnp-false-f204a62a-d30b-4202-b164-2ac2caa485de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004417681s May 4 16:04:34.181: INFO: Pod "alpine-nnp-false-f204a62a-d30b-4202-b164-2ac2caa485de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008365435s May 4 16:04:34.181: INFO: Pod "alpine-nnp-false-f204a62a-d30b-4202-b164-2ac2caa485de" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:34.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5167" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":6,"skipped":128,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:21.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:37.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1388" for this suite. • [SLOW TEST:16.063 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":7,"skipped":128,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:37.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:37.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-636" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":8,"skipped":150,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:33.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 4 16:04:33.900: INFO: Waiting up to 5m0s for pod "pod-79f40751-1003-424d-a360-5f83d6def6bf" in namespace "emptydir-5736" to be "Succeeded or Failed" May 4 16:04:33.903: INFO: Pod "pod-79f40751-1003-424d-a360-5f83d6def6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.981178ms May 4 16:04:35.906: INFO: Pod "pod-79f40751-1003-424d-a360-5f83d6def6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006509305s May 4 16:04:37.909: INFO: Pod "pod-79f40751-1003-424d-a360-5f83d6def6bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009441922s STEP: Saw pod success May 4 16:04:37.909: INFO: Pod "pod-79f40751-1003-424d-a360-5f83d6def6bf" satisfied condition "Succeeded or Failed" May 4 16:04:37.911: INFO: Trying to get logs from node node2 pod pod-79f40751-1003-424d-a360-5f83d6def6bf container test-container: STEP: delete the pod May 4 16:04:37.995: INFO: Waiting for pod pod-79f40751-1003-424d-a360-5f83d6def6bf to disappear May 4 16:04:37.997: INFO: Pod pod-79f40751-1003-424d-a360-5f83d6def6bf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:37.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5736" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":50,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:27.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-d1d88a52-99ff-43ec-9309-3d44d91dc38b STEP: Creating secret with name s-test-opt-upd-e28cc5b9-dac3-4d25-971a-0ab3590cde3c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d1d88a52-99ff-43ec-9309-3d44d91dc38b STEP: Updating secret s-test-opt-upd-e28cc5b9-dac3-4d25-971a-0ab3590cde3c STEP: Creating secret with name 
s-test-opt-create-54c064b2-907f-4cb3-acc4-0b0bed019647 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:39.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8726" for this suite. • [SLOW TEST:12.113 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":120,"failed":0} SS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:39.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 4 16:04:39.735: INFO: created pod pod-service-account-defaultsa May 4 16:04:39.735: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 4 16:04:39.745: INFO: created pod pod-service-account-mountsa May 4 16:04:39.745: INFO: pod pod-service-account-mountsa service account token volume mount: true May 4 16:04:39.754: INFO: created pod pod-service-account-nomountsa May 4 16:04:39.754: 
INFO: pod pod-service-account-nomountsa service account token volume mount: false May 4 16:04:39.763: INFO: created pod pod-service-account-defaultsa-mountspec May 4 16:04:39.763: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 4 16:04:39.822: INFO: created pod pod-service-account-mountsa-mountspec May 4 16:04:39.822: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 4 16:04:39.832: INFO: created pod pod-service-account-nomountsa-mountspec May 4 16:04:39.832: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 4 16:04:39.843: INFO: created pod pod-service-account-defaultsa-nomountspec May 4 16:04:39.843: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 4 16:04:39.851: INFO: created pod pod-service-account-mountsa-nomountspec May 4 16:04:39.851: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 4 16:04:39.861: INFO: created pod pod-service-account-nomountsa-nomountspec May 4 16:04:39.861: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:39.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2300" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":10,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:34.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:04:34.273: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9581c802-5bd0-4284-adae-76ce9adecc09" in namespace "projected-7032" to be "Succeeded or Failed" May 4 16:04:34.275: INFO: Pod "downwardapi-volume-9581c802-5bd0-4284-adae-76ce9adecc09": Phase="Pending", Reason="", readiness=false. Elapsed: 1.827288ms May 4 16:04:36.277: INFO: Pod "downwardapi-volume-9581c802-5bd0-4284-adae-76ce9adecc09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003874883s May 4 16:04:38.280: INFO: Pod "downwardapi-volume-9581c802-5bd0-4284-adae-76ce9adecc09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006778265s May 4 16:04:40.283: INFO: Pod "downwardapi-volume-9581c802-5bd0-4284-adae-76ce9adecc09": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.010257467s STEP: Saw pod success May 4 16:04:40.283: INFO: Pod "downwardapi-volume-9581c802-5bd0-4284-adae-76ce9adecc09" satisfied condition "Succeeded or Failed" May 4 16:04:40.286: INFO: Trying to get logs from node node2 pod downwardapi-volume-9581c802-5bd0-4284-adae-76ce9adecc09 container client-container: STEP: delete the pod May 4 16:04:40.299: INFO: Waiting for pod downwardapi-volume-9581c802-5bd0-4284-adae-76ce9adecc09 to disappear May 4 16:04:40.301: INFO: Pod downwardapi-volume-9581c802-5bd0-4284-adae-76ce9adecc09 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:40.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7032" for this suite. • [SLOW TEST:6.067 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":156,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:28.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl 
logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1385 STEP: creating a pod May 4 16:04:28.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6813 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' May 4 16:04:29.042: INFO: stderr: "" May 4 16:04:29.042: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 4 16:04:29.042: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 4 16:04:29.042: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6813" to be "running and ready, or succeeded" May 4 16:04:29.044: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196866ms May 4 16:04:31.047: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00503934s May 4 16:04:33.050: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.007925088s May 4 16:04:33.050: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 4 16:04:33.050: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 4 16:04:33.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6813 logs logs-generator logs-generator' May 4 16:04:33.191: INFO: stderr: "" May 4 16:04:33.192: INFO: stdout: "I0504 16:04:31.683857 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/vpj8 380\nI0504 16:04:31.883946 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/82l 422\nI0504 16:04:32.083899 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/z6ww 392\nI0504 16:04:32.283926 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/q4zs 466\nI0504 16:04:32.483937 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/mrr 229\nI0504 16:04:32.683916 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/j4r7 591\nI0504 16:04:32.883873 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/4dw 289\nI0504 16:04:33.084049 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/94kl 467\n" STEP: limiting log lines May 4 16:04:33.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6813 logs logs-generator logs-generator --tail=1' May 4 16:04:33.343: INFO: stderr: "" May 4 16:04:33.343: INFO: stdout: "I0504 16:04:33.283944 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/r5s 454\n" May 4 16:04:33.343: INFO: got output "I0504 16:04:33.283944 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/r5s 454\n" STEP: limiting log bytes May 4 16:04:33.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6813 logs logs-generator logs-generator --limit-bytes=1' May 4 16:04:33.503: INFO: stderr: "" May 4 16:04:33.503: INFO: stdout: "I" May 4 16:04:33.504: INFO: got output "I" STEP: exposing timestamps May 4 16:04:33.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6813 logs logs-generator 
logs-generator --tail=1 --timestamps' May 4 16:04:33.671: INFO: stderr: "" May 4 16:04:33.671: INFO: stdout: "2021-05-04T16:04:33.485104437Z I0504 16:04:33.483933 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/jhvm 548\n" May 4 16:04:33.671: INFO: got output "2021-05-04T16:04:33.485104437Z I0504 16:04:33.483933 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/jhvm 548\n" STEP: restricting to a time range May 4 16:04:36.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6813 logs logs-generator logs-generator --since=1s' May 4 16:04:36.329: INFO: stderr: "" May 4 16:04:36.329: INFO: stdout: "I0504 16:04:35.483926 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/c7ds 547\nI0504 16:04:35.683922 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/wk4 485\nI0504 16:04:35.883958 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/6rmd 372\nI0504 16:04:36.084114 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/fqv 237\nI0504 16:04:36.283928 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/7ht 537\n" May 4 16:04:36.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6813 logs logs-generator logs-generator --since=24h' May 4 16:04:36.488: INFO: stderr: "" May 4 16:04:36.488: INFO: stdout: "I0504 16:04:31.683857 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/vpj8 380\nI0504 16:04:31.883946 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/82l 422\nI0504 16:04:32.083899 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/z6ww 392\nI0504 16:04:32.283926 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/q4zs 466\nI0504 16:04:32.483937 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/mrr 229\nI0504 16:04:32.683916 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/j4r7 591\nI0504 16:04:32.883873 1 logs_generator.go:76] 
6 GET /api/v1/namespaces/ns/pods/4dw 289\nI0504 16:04:33.084049 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/94kl 467\nI0504 16:04:33.283944 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/r5s 454\nI0504 16:04:33.483933 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/jhvm 548\nI0504 16:04:33.683942 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/w6n 401\nI0504 16:04:33.883955 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/mj8c 335\nI0504 16:04:34.083933 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/p7q 242\nI0504 16:04:34.283978 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/9sz 413\nI0504 16:04:34.484054 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/gn9c 281\nI0504 16:04:34.683998 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/mbn 398\nI0504 16:04:34.884002 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/4dz8 511\nI0504 16:04:35.083941 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/29h 449\nI0504 16:04:35.284000 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/4sqf 440\nI0504 16:04:35.483926 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/c7ds 547\nI0504 16:04:35.683922 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/wk4 485\nI0504 16:04:35.883958 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/6rmd 372\nI0504 16:04:36.084114 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/fqv 237\nI0504 16:04:36.283928 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/7ht 537\nI0504 16:04:36.483939 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/mhf 231\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1390 May 4 16:04:36.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=kubectl-6813 delete pod logs-generator' May 4 16:04:42.619: INFO: stderr: "" May 4 16:04:42.619: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:42.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6813" for this suite. • [SLOW TEST:13.765 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":6,"skipped":114,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:38.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:04:38.041: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:43.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4849" for this suite. • [SLOW TEST:5.564 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":4,"skipped":54,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:37.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-a8dc8ecd-afd0-4c9c-9325-5a1c45c05150 STEP: Creating a pod to test consume configMaps May 4 16:04:37.931: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf" in namespace "projected-3848" to be "Succeeded or Failed" May 4 16:04:37.934: INFO: Pod "pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086737ms May 4 16:04:39.936: INFO: Pod "pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00448905s May 4 16:04:41.939: INFO: Pod "pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007490478s May 4 16:04:43.942: INFO: Pod "pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010301855s May 4 16:04:45.944: INFO: Pod "pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.012745471s STEP: Saw pod success May 4 16:04:45.944: INFO: Pod "pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf" satisfied condition "Succeeded or Failed" May 4 16:04:45.946: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf container projected-configmap-volume-test: STEP: delete the pod May 4 16:04:45.958: INFO: Waiting for pod pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf to disappear May 4 16:04:45.960: INFO: Pod pod-projected-configmaps-0a2d5cb1-9241-45c8-aff0-58e025f8eddf no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:45.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3848" for this suite. 
• [SLOW TEST:8.067 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":154,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:40.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-da98196c-2b29-47a1-bef6-8d444b9bcb18 STEP: Creating a pod to test consume configMaps May 4 16:04:40.376: INFO: Waiting up to 5m0s for pod "pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149" in namespace "configmap-9171" to be "Succeeded or Failed" May 4 16:04:40.381: INFO: Pod "pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149": Phase="Pending", Reason="", readiness=false. Elapsed: 5.145711ms May 4 16:04:42.385: INFO: Pod "pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008908173s May 4 16:04:44.389: INFO: Pod "pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.013105832s May 4 16:04:46.392: INFO: Pod "pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016048647s May 4 16:04:48.396: INFO: Pod "pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019859442s May 4 16:04:50.399: INFO: Pod "pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.023797102s STEP: Saw pod success May 4 16:04:50.400: INFO: Pod "pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149" satisfied condition "Succeeded or Failed" May 4 16:04:50.402: INFO: Trying to get logs from node node1 pod pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149 container configmap-volume-test: STEP: delete the pod May 4 16:04:50.416: INFO: Waiting for pod pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149 to disappear May 4 16:04:50.418: INFO: Pod pod-configmaps-aa0d138f-8031-4bbc-aafa-4843ee52a149 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:50.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9171" for this suite. 
• [SLOW TEST:10.086 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:43.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 4 16:04:43.626: INFO: Waiting up to 5m0s for pod "pod-37f550cd-45b6-4fd1-9933-3251e316dcf2" in namespace "emptydir-4060" to be "Succeeded or Failed" May 4 16:04:43.631: INFO: Pod "pod-37f550cd-45b6-4fd1-9933-3251e316dcf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318529ms May 4 16:04:45.634: INFO: Pod "pod-37f550cd-45b6-4fd1-9933-3251e316dcf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007070223s May 4 16:04:47.636: INFO: Pod "pod-37f550cd-45b6-4fd1-9933-3251e316dcf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009952329s May 4 16:04:49.639: INFO: Pod "pod-37f550cd-45b6-4fd1-9933-3251e316dcf2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.012711457s May 4 16:04:51.642: INFO: Pod "pod-37f550cd-45b6-4fd1-9933-3251e316dcf2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01528408s May 4 16:04:53.645: INFO: Pod "pod-37f550cd-45b6-4fd1-9933-3251e316dcf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.018234637s STEP: Saw pod success May 4 16:04:53.645: INFO: Pod "pod-37f550cd-45b6-4fd1-9933-3251e316dcf2" satisfied condition "Succeeded or Failed" May 4 16:04:53.648: INFO: Trying to get logs from node node2 pod pod-37f550cd-45b6-4fd1-9933-3251e316dcf2 container test-container: STEP: delete the pod May 4 16:04:53.660: INFO: Waiting for pod pod-37f550cd-45b6-4fd1-9933-3251e316dcf2 to disappear May 4 16:04:53.662: INFO: Pod pod-37f550cd-45b6-4fd1-9933-3251e316dcf2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:53.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4060" for this suite. 
• [SLOW TEST:10.076 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":56,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:45.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-ea177749-c082-47bb-a846-7148fad663c3 STEP: Creating a pod to test consume secrets May 4 16:04:46.017: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0" in namespace "projected-1537" to be "Succeeded or Failed" May 4 16:04:46.021: INFO: Pod "pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233992ms May 4 16:04:48.026: INFO: Pod "pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009232819s May 4 16:04:50.029: INFO: Pod "pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.011917723s May 4 16:04:52.031: INFO: Pod "pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014344748s May 4 16:04:54.034: INFO: Pod "pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017345007s STEP: Saw pod success May 4 16:04:54.034: INFO: Pod "pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0" satisfied condition "Succeeded or Failed" May 4 16:04:54.037: INFO: Trying to get logs from node node1 pod pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0 container projected-secret-volume-test: STEP: delete the pod May 4 16:04:54.050: INFO: Waiting for pod pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0 to disappear May 4 16:04:54.052: INFO: Pod pod-projected-secrets-9c581f38-0c10-4cfc-9fc9-867f1eda7cd0 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:54.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1537" for this suite. 
• [SLOW TEST:8.080 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:39.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-6595 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6595 to expose endpoints map[] May 4 16:04:39.944: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found May 4 16:04:40.949: INFO: successfully validated that service multi-endpoint-test in namespace services-6595 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6595 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6595 to expose endpoints map[pod1:[100]] May 4 16:04:44.970: INFO: Unexpected endpoints: found map[], expected 
map[pod1:[100]], will retry May 4 16:04:49.971: INFO: successfully validated that service multi-endpoint-test in namespace services-6595 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-6595 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6595 to expose endpoints map[pod1:[100] pod2:[101]] May 4 16:04:53.989: INFO: Unexpected endpoints: found map[3516df0b-903f-4039-8520-d6646caa05d0:[100]], expected map[pod1:[100] pod2:[101]], will retry May 4 16:04:56.992: INFO: successfully validated that service multi-endpoint-test in namespace services-6595 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-6595 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6595 to expose endpoints map[pod2:[101]] May 4 16:04:57.007: INFO: successfully validated that service multi-endpoint-test in namespace services-6595 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-6595 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6595 to expose endpoints map[] May 4 16:04:57.019: INFO: successfully validated that service multi-endpoint-test in namespace services-6595 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:57.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6595" for this suite. 
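The `Unexpected endpoints: found …, expected …, will retry` / `successfully validated …` entries above show the services test repeatedly fetching the service's endpoint map (pod → ports) and comparing it against the expected map until they match. A minimal sketch of that validate-with-retry pattern (illustrative only; names are hypothetical and the fetch is injected so no cluster is needed):

```python
def validate_endpoints(fetch, expected, attempts=5):
    """Call fetch() up to `attempts` times until it equals `expected`.

    Returns the number of attempts used; raises AssertionError if the
    observed endpoint map never converges, echoing the log's
    "Unexpected endpoints: found ..., will retry" behaviour.
    """
    last = None
    for attempt in range(1, attempts + 1):
        last = fetch()
        if last == expected:
            return attempt
    raise AssertionError(f"expected {expected}, last saw {last}")

# Simulate the endpoints appearing on the second poll, as in the log
# (first an empty map, then pod1 exposing port 100).
obs = iter([{}, {"pod1": [100]}])
used = validate_endpoints(lambda: next(obs), {"pod1": [100]})
print(used)  # 2
```

A real implementation would also translate endpoint pod UIDs back to pod names before comparing, which is why the log shows a found map keyed by a UID against an expected map keyed by `pod1`/`pod2`.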
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:17.123 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":11,"skipped":142,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:53.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-0cbeb0b0-f4a2-4a36-9039-944970584065 STEP: Creating a pod to test consume secrets May 4 16:04:53.717: INFO: Waiting up to 5m0s for pod "pod-secrets-48ef76c1-4827-48a0-b535-f57c45caf94d" in namespace "secrets-2996" to be "Succeeded or Failed" May 4 16:04:53.723: INFO: Pod "pod-secrets-48ef76c1-4827-48a0-b535-f57c45caf94d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.592053ms May 4 16:04:55.725: INFO: Pod "pod-secrets-48ef76c1-4827-48a0-b535-f57c45caf94d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007840633s May 4 16:04:57.728: INFO: Pod "pod-secrets-48ef76c1-4827-48a0-b535-f57c45caf94d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010610435s May 4 16:04:59.730: INFO: Pod "pod-secrets-48ef76c1-4827-48a0-b535-f57c45caf94d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012928889s STEP: Saw pod success May 4 16:04:59.730: INFO: Pod "pod-secrets-48ef76c1-4827-48a0-b535-f57c45caf94d" satisfied condition "Succeeded or Failed" May 4 16:04:59.732: INFO: Trying to get logs from node node2 pod pod-secrets-48ef76c1-4827-48a0-b535-f57c45caf94d container secret-volume-test: STEP: delete the pod May 4 16:04:59.865: INFO: Waiting for pod pod-secrets-48ef76c1-4827-48a0-b535-f57c45caf94d to disappear May 4 16:04:59.866: INFO: Pod pod-secrets-48ef76c1-4827-48a0-b535-f57c45caf94d no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:04:59.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2996" for this suite. • [SLOW TEST:6.191 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":60,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:01.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-52770847-070e-41c1-90bc-0e1343c35473 in namespace container-probe-9631 May 4 16:04:06.025: INFO: Started pod busybox-52770847-070e-41c1-90bc-0e1343c35473 in namespace container-probe-9631 STEP: checking the pod's current state and verifying that restartCount is present May 4 16:04:06.027: INFO: Initial restart count of pod busybox-52770847-070e-41c1-90bc-0e1343c35473 is 0 May 4 16:05:00.109: INFO: Restart count of pod container-probe-9631/busybox-52770847-070e-41c1-90bc-0e1343c35473 is now 1 (54.081769406s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:00.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9631" for this suite. 
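The container-probe entries above record the pod's initial `restartCount`, then wait until the kubelet's liveness probe (`cat /tmp/health`) fails and the count increases (here, from 0 to 1 after ~54s). A sketch of that wait, under the same hedges as above (hypothetical names, injected counter, no cluster required):

```python
def wait_for_restart_increase(get_restart_count, initial, max_polls=60):
    """Poll get_restart_count() until it exceeds `initial`.

    Returns the new restart count; raises TimeoutError if it never
    rises within max_polls polls.
    """
    for _ in range(max_polls):
        count = get_restart_count()
        if count > initial:
            return count
    raise TimeoutError(f"restart count never rose above {initial}")

# Simulate a container restarted once after its liveness probe fails.
counts = iter([0, 0, 0, 1])
new_count = wait_for_restart_increase(lambda: next(counts), initial=0)
print(new_count)  # 1
```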
• [SLOW TEST:58.138 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":68,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:54.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-204bc82a-13bc-4855-ac5e-ffb598936108 STEP: Creating a pod to test consume secrets May 4 16:04:54.185: INFO: Waiting up to 5m0s for pod "pod-secrets-e32b0acf-8fdf-42f9-b36d-757c8a5756be" in namespace "secrets-2625" to be "Succeeded or Failed" May 4 16:04:54.189: INFO: Pod "pod-secrets-e32b0acf-8fdf-42f9-b36d-757c8a5756be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18333ms May 4 16:04:56.193: INFO: Pod "pod-secrets-e32b0acf-8fdf-42f9-b36d-757c8a5756be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00751585s May 4 16:04:58.197: INFO: Pod "pod-secrets-e32b0acf-8fdf-42f9-b36d-757c8a5756be": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.011408797s May 4 16:05:00.199: INFO: Pod "pod-secrets-e32b0acf-8fdf-42f9-b36d-757c8a5756be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013758108s STEP: Saw pod success May 4 16:05:00.199: INFO: Pod "pod-secrets-e32b0acf-8fdf-42f9-b36d-757c8a5756be" satisfied condition "Succeeded or Failed" May 4 16:05:00.201: INFO: Trying to get logs from node node2 pod pod-secrets-e32b0acf-8fdf-42f9-b36d-757c8a5756be container secret-volume-test: STEP: delete the pod May 4 16:05:00.215: INFO: Waiting for pod pod-secrets-e32b0acf-8fdf-42f9-b36d-757c8a5756be to disappear May 4 16:05:00.218: INFO: Pod pod-secrets-e32b0acf-8fdf-42f9-b36d-757c8a5756be no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:00.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2625" for this suite. • [SLOW TEST:6.087 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:57.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:03.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8041" for this suite. • [SLOW TEST:6.061 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":156,"failed":0} S ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":200,"failed":0} [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:00.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Creating projection with secret that has name projected-secret-test-62dca0ba-5bb5-404a-b901-7be40b30b398 STEP: Creating a pod to test consume secrets May 4 16:05:00.271: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-64766be2-9dc2-4f7e-8a49-23b1b44a28a0" in namespace "projected-3407" to be "Succeeded or Failed" May 4 16:05:00.273: INFO: Pod "pod-projected-secrets-64766be2-9dc2-4f7e-8a49-23b1b44a28a0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.905597ms May 4 16:05:02.276: INFO: Pod "pod-projected-secrets-64766be2-9dc2-4f7e-8a49-23b1b44a28a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004871436s May 4 16:05:04.279: INFO: Pod "pod-projected-secrets-64766be2-9dc2-4f7e-8a49-23b1b44a28a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007800002s STEP: Saw pod success May 4 16:05:04.279: INFO: Pod "pod-projected-secrets-64766be2-9dc2-4f7e-8a49-23b1b44a28a0" satisfied condition "Succeeded or Failed" May 4 16:05:04.282: INFO: Trying to get logs from node node2 pod pod-projected-secrets-64766be2-9dc2-4f7e-8a49-23b1b44a28a0 container projected-secret-volume-test: STEP: delete the pod May 4 16:05:04.294: INFO: Waiting for pod pod-projected-secrets-64766be2-9dc2-4f7e-8a49-23b1b44a28a0 to disappear May 4 16:05:04.296: INFO: Pod pod-projected-secrets-64766be2-9dc2-4f7e-8a49-23b1b44a28a0 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:04.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3407" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":200,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:42.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 4 16:04:42.662: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:04.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9144" for this suite. 
• [SLOW TEST:22.028 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":7,"skipped":119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:00.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 16:05:00.557: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 16:05:02.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741100, loc:(*time.Location)(0x770c940)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741100, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741100, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741100, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:05:04.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741100, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741100, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741100, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741100, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:05:07.575: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the 
AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:07.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5112" for this suite. STEP: Destroying namespace "webhook-5112-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.522 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":4,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:04.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 
4 16:05:04.377: INFO: Waiting up to 5m0s for pod "client-containers-190567f6-383c-430f-aa58-22994e83f1cf" in namespace "containers-7319" to be "Succeeded or Failed" May 4 16:05:04.382: INFO: Pod "client-containers-190567f6-383c-430f-aa58-22994e83f1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.461799ms May 4 16:05:06.385: INFO: Pod "client-containers-190567f6-383c-430f-aa58-22994e83f1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007467441s May 4 16:05:08.388: INFO: Pod "client-containers-190567f6-383c-430f-aa58-22994e83f1cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010420107s STEP: Saw pod success May 4 16:05:08.388: INFO: Pod "client-containers-190567f6-383c-430f-aa58-22994e83f1cf" satisfied condition "Succeeded or Failed" May 4 16:05:08.391: INFO: Trying to get logs from node node2 pod client-containers-190567f6-383c-430f-aa58-22994e83f1cf container test-container: STEP: delete the pod May 4 16:05:08.402: INFO: Waiting for pod client-containers-190567f6-383c-430f-aa58-22994e83f1cf to disappear May 4 16:05:08.404: INFO: Pod client-containers-190567f6-383c-430f-aa58-22994e83f1cf no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:08.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7319" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":216,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:03.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:05:03.151: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:09.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7231" for this suite. 
• [SLOW TEST:6.044 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":13,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:09.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 4 16:05:09.279: INFO: Waiting up to 5m0s for pod "pod-702efad9-893e-4055-8aa7-19c33edfc9b2" in namespace "emptydir-8051" to be "Succeeded or Failed" May 4 16:05:09.281: INFO: Pod "pod-702efad9-893e-4055-8aa7-19c33edfc9b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534355ms May 4 16:05:11.284: INFO: Pod "pod-702efad9-893e-4055-8aa7-19c33edfc9b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005727222s May 4 16:05:13.288: INFO: Pod "pod-702efad9-893e-4055-8aa7-19c33edfc9b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008897173s STEP: Saw pod success May 4 16:05:13.288: INFO: Pod "pod-702efad9-893e-4055-8aa7-19c33edfc9b2" satisfied condition "Succeeded or Failed" May 4 16:05:13.290: INFO: Trying to get logs from node node1 pod pod-702efad9-893e-4055-8aa7-19c33edfc9b2 container test-container: STEP: delete the pod May 4 16:05:13.302: INFO: Waiting for pod pod-702efad9-893e-4055-8aa7-19c33edfc9b2 to disappear May 4 16:05:13.304: INFO: Pod pod-702efad9-893e-4055-8aa7-19c33edfc9b2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:13.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8051" for this suite. • ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:03:59.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5258 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-5258 
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5258 May 4 16:03:59.794: INFO: Found 0 stateful pods, waiting for 1 May 4 16:04:09.797: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 4 16:04:19.800: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 4 16:04:19.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5258 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:04:20.085: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:04:20.085: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:04:20.085: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 16:04:20.087: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 4 16:04:30.090: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 4 16:04:30.090: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:04:30.100: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:04:30.100: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC }] May 4 16:04:30.100: INFO: May 4 16:04:30.100: INFO: StatefulSet ss has not reached scale 3, at 1 May 4 16:04:31.105: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.997423075s May 4 16:04:32.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992935972s May 4 16:04:33.111: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.990373574s May 4 16:04:34.115: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.986989435s May 4 16:04:35.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.983126422s May 4 16:04:36.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.979433429s May 4 16:04:37.125: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.975993445s May 4 16:04:38.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.972706319s May 4 16:04:39.132: INFO: Verifying statefulset ss doesn't scale past 3 for another 969.194089ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5258 May 4 16:04:40.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5258 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 16:04:40.425: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 4 16:04:40.425: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 16:04:40.425: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 16:04:40.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5258 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 16:04:41.709: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 4 16:04:41.709: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 16:04:41.709: 
INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 16:04:41.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5258 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 16:04:42.434: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 4 16:04:42.434: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 16:04:42.434: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 16:04:42.438: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 4 16:04:52.442: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 4 16:04:52.442: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 4 16:04:52.442: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 4 16:04:52.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5258 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:04:52.891: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:04:52.891: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:04:52.891: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 16:04:52.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5258 exec ss-1 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:04:53.154: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:04:53.154: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:04:53.154: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 16:04:53.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5258 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:04:53.422: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:04:53.422: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:04:53.422: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 16:04:53.422: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:04:53.426: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 4 16:05:03.432: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 4 16:05:03.432: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 4 16:05:03.432: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 4 16:05:03.441: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:05:03.441: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 
+0000 UTC }] May 4 16:05:03.441: INFO: ss-1 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:03.442: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:03.442: INFO: May 4 16:05:03.442: INFO: StatefulSet ss has not reached scale 0, at 3 May 4 16:05:04.445: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:05:04.445: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC }] May 4 16:05:04.445: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:53 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:04.445: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:04.445: INFO: May 4 16:05:04.445: INFO: StatefulSet ss has not reached scale 0, at 3 May 4 16:05:05.449: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:05:05.449: INFO: ss-0 node2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC }] May 4 16:05:05.449: INFO: ss-1 node1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:05.449: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:05.449: INFO: May 4 16:05:05.449: INFO: StatefulSet ss has not reached scale 0, at 3 May 4 16:05:06.452: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:05:06.452: INFO: ss-0 node2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC }] May 4 16:05:06.452: INFO: ss-1 node1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:06.452: INFO: ss-2 node1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:06.453: INFO: May 4 16:05:06.453: INFO: StatefulSet 
ss has not reached scale 0, at 3 May 4 16:05:07.455: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:05:07.455: INFO: ss-0 node2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC }] May 4 16:05:07.455: INFO: ss-2 node1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:07.455: INFO: May 4 16:05:07.455: INFO: StatefulSet ss has not reached scale 0, at 2 May 4 16:05:08.458: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:05:08.458: INFO: ss-0 node2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC }] May 4 16:05:08.458: INFO: ss-2 node1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:08.458: INFO: May 4 16:05:08.458: INFO: StatefulSet ss has not reached scale 0, at 2 May 4 16:05:09.461: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:05:09.461: INFO: ss-0 node2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:59 +0000 UTC }] May 4 16:05:09.462: INFO: ss-2 node1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }] May 4 16:05:09.462: INFO: May 4 16:05:09.462: INFO: StatefulSet ss has not reached scale 0, at 2 May 4 16:05:10.464: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.977075981s May 4 16:05:11.467: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.97451616s May 4 16:05:12.470: INFO: Verifying statefulset ss doesn't scale past 0 for another 971.305656ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5258 May 4 16:05:13.473: INFO: Scaling statefulset ss to 0 May 4 16:05:13.480: INFO: Waiting for statefulset 
status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 4 16:05:13.482: INFO: Deleting all statefulset in ns statefulset-5258 May 4 16:05:13.484: INFO: Scaling statefulset ss to 0 May 4 16:05:13.491: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:05:13.493: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:13.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5258" for this suite. • [SLOW TEST:73.752 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":2,"skipped":57,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:50.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-dw79 STEP: Creating a pod to test atomic-volume-subpath May 4 16:04:50.530: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dw79" in namespace "subpath-5377" to be "Succeeded or Failed" May 4 16:04:50.534: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Pending", Reason="", readiness=false. Elapsed: 3.623411ms May 4 16:04:52.537: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007056183s May 4 16:04:54.540: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010152794s May 4 16:04:56.544: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. Elapsed: 6.01407168s May 4 16:04:58.548: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. Elapsed: 8.017752746s May 4 16:05:00.551: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. Elapsed: 10.021037596s May 4 16:05:02.554: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. Elapsed: 12.024495595s May 4 16:05:04.557: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. Elapsed: 14.027105107s May 4 16:05:06.560: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. Elapsed: 16.030511174s May 4 16:05:08.565: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.034589286s May 4 16:05:10.568: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. Elapsed: 20.03798375s May 4 16:05:12.572: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. Elapsed: 22.042459745s May 4 16:05:14.576: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Running", Reason="", readiness=true. Elapsed: 24.045944639s May 4 16:05:16.579: INFO: Pod "pod-subpath-test-configmap-dw79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.049393668s STEP: Saw pod success May 4 16:05:16.579: INFO: Pod "pod-subpath-test-configmap-dw79" satisfied condition "Succeeded or Failed" May 4 16:05:16.581: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-dw79 container test-container-subpath-configmap-dw79: STEP: delete the pod May 4 16:05:16.594: INFO: Waiting for pod pod-subpath-test-configmap-dw79 to disappear May 4 16:05:16.596: INFO: Pod pod-subpath-test-configmap-dw79 no longer exists STEP: Deleting pod pod-subpath-test-configmap-dw79 May 4 16:05:16.596: INFO: Deleting pod "pod-subpath-test-configmap-dw79" in namespace "subpath-5377" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:16.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5377" for this suite. 
• [SLOW TEST:26.117 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":200,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:16.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:05:16.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8462" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":8,"skipped":207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:13.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-fd5b6a6d-3d92-4b5f-86ed-d4a4b21f01e9
STEP: Creating a pod to test consume configMaps
May 4 16:05:13.576: INFO: Waiting up to 5m0s for pod "pod-configmaps-bdeee70d-fcff-4265-aa86-9f61f731ebc2" in namespace "configmap-8154" to be "Succeeded or Failed"
May 4 16:05:13.578: INFO: Pod "pod-configmaps-bdeee70d-fcff-4265-aa86-9f61f731ebc2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.865054ms
May 4 16:05:15.581: INFO: Pod "pod-configmaps-bdeee70d-fcff-4265-aa86-9f61f731ebc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004916953s
May 4 16:05:17.585: INFO: Pod "pod-configmaps-bdeee70d-fcff-4265-aa86-9f61f731ebc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008700827s
STEP: Saw pod success
May 4 16:05:17.585: INFO: Pod "pod-configmaps-bdeee70d-fcff-4265-aa86-9f61f731ebc2" satisfied condition "Succeeded or Failed"
May 4 16:05:17.587: INFO: Trying to get logs from node node1 pod pod-configmaps-bdeee70d-fcff-4265-aa86-9f61f731ebc2 container configmap-volume-test:
STEP: delete the pod
May 4 16:05:17.600: INFO: Waiting for pod pod-configmaps-bdeee70d-fcff-4265-aa86-9f61f731ebc2 to disappear
May 4 16:05:17.602: INFO: Pod pod-configmaps-bdeee70d-fcff-4265-aa86-9f61f731ebc2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:05:17.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8154" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":60,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:04:59.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-531
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-531
STEP: creating replication controller externalsvc in namespace services-531
I0504 16:04:59.997992 32 runners.go:190] Created replication controller with name: externalsvc, namespace: services-531, replica count: 2
I0504 16:05:03.048519 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0504 16:05:06.048916 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
May 4 16:05:06.060: INFO: Creating new exec pod
May 4 16:05:12.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-531 exec execpod29mt5 -- /bin/sh -x -c nslookup clusterip-service.services-531.svc.cluster.local'
May 4 16:05:12.340: INFO: stderr: "+ nslookup clusterip-service.services-531.svc.cluster.local\n"
May 4 16:05:12.340: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-531.svc.cluster.local\tcanonical name = externalsvc.services-531.svc.cluster.local.\nName:\texternalsvc.services-531.svc.cluster.local\nAddress: 10.233.36.168\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-531, will wait for the garbage collector to delete the pods
May 4 16:05:12.397: INFO: Deleting ReplicationController externalsvc took: 4.42178ms
May 4 16:05:12.497: INFO: Terminating ReplicationController externalsvc pods took: 100.380473ms
May 4 16:05:18.609: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:05:18.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-531" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:18.665 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":7,"skipped":104,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:16.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
May 4 16:05:16.845: INFO: Waiting up to 5m0s for pod "client-containers-2cce7488-84d9-4b1d-8cda-84d5c542bf5f" in namespace "containers-1059" to be "Succeeded or Failed"
May 4 16:05:16.849: INFO: Pod "client-containers-2cce7488-84d9-4b1d-8cda-84d5c542bf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265498ms
May 4 16:05:18.852: INFO: Pod "client-containers-2cce7488-84d9-4b1d-8cda-84d5c542bf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007461224s
May 4 16:05:20.855: INFO: Pod "client-containers-2cce7488-84d9-4b1d-8cda-84d5c542bf5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010000677s
STEP: Saw pod success
May 4 16:05:20.855: INFO: Pod "client-containers-2cce7488-84d9-4b1d-8cda-84d5c542bf5f" satisfied condition "Succeeded or Failed"
May 4 16:05:20.857: INFO: Trying to get logs from node node2 pod client-containers-2cce7488-84d9-4b1d-8cda-84d5c542bf5f container test-container:
STEP: delete the pod
May 4 16:05:20.870: INFO: Waiting for pod client-containers-2cce7488-84d9-4b1d-8cda-84d5c542bf5f to disappear
May 4 16:05:20.872: INFO: Pod client-containers-2cce7488-84d9-4b1d-8cda-84d5c542bf5f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:05:20.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1059" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":254,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:17.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 4 16:05:17.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31fd6bc1-2637-4f07-b7a4-baf1071ebd7d" in namespace "downward-api-4931" to be "Succeeded or Failed"
May 4 16:05:17.772: INFO: Pod "downwardapi-volume-31fd6bc1-2637-4f07-b7a4-baf1071ebd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215048ms
May 4 16:05:19.775: INFO: Pod "downwardapi-volume-31fd6bc1-2637-4f07-b7a4-baf1071ebd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00489411s
May 4 16:05:21.777: INFO: Pod "downwardapi-volume-31fd6bc1-2637-4f07-b7a4-baf1071ebd7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007684237s
STEP: Saw pod success
May 4 16:05:21.777: INFO: Pod "downwardapi-volume-31fd6bc1-2637-4f07-b7a4-baf1071ebd7d" satisfied condition "Succeeded or Failed"
May 4 16:05:21.779: INFO: Trying to get logs from node node1 pod downwardapi-volume-31fd6bc1-2637-4f07-b7a4-baf1071ebd7d container client-container:
STEP: delete the pod
May 4 16:05:21.791: INFO: Waiting for pod downwardapi-volume-31fd6bc1-2637-4f07-b7a4-baf1071ebd7d to disappear
May 4 16:05:21.793: INFO: Pod downwardapi-volume-31fd6bc1-2637-4f07-b7a4-baf1071ebd7d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:05:21.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4931" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":121,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:18.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-c22950d8-e9b3-4dda-95f7-8209166243a4
STEP: Creating a pod to test consume secrets
May 4 16:05:18.684: INFO: Waiting up to 5m0s for pod "pod-secrets-7f8cb6d3-03dc-45d6-8951-e9163d3f806c" in namespace "secrets-4434" to be "Succeeded or Failed"
May 4 16:05:18.686: INFO: Pod "pod-secrets-7f8cb6d3-03dc-45d6-8951-e9163d3f806c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.765919ms
May 4 16:05:20.691: INFO: Pod "pod-secrets-7f8cb6d3-03dc-45d6-8951-e9163d3f806c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007039019s
May 4 16:05:22.695: INFO: Pod "pod-secrets-7f8cb6d3-03dc-45d6-8951-e9163d3f806c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010968344s
STEP: Saw pod success
May 4 16:05:22.695: INFO: Pod "pod-secrets-7f8cb6d3-03dc-45d6-8951-e9163d3f806c" satisfied condition "Succeeded or Failed"
May 4 16:05:22.698: INFO: Trying to get logs from node node2 pod pod-secrets-7f8cb6d3-03dc-45d6-8951-e9163d3f806c container secret-volume-test:
STEP: delete the pod
May 4 16:05:22.712: INFO: Waiting for pod pod-secrets-7f8cb6d3-03dc-45d6-8951-e9163d3f806c to disappear
May 4 16:05:22.714: INFO: Pod pod-secrets-7f8cb6d3-03dc-45d6-8951-e9163d3f806c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:05:22.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4434" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":112,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":190,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:13.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
May 4 16:05:13.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 create -f -'
May 4 16:05:13.671: INFO: stderr: ""
May 4 16:05:13.671: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 4 16:05:13.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:05:13.827: INFO: stderr: ""
May 4 16:05:13.827: INFO: stdout: "update-demo-nautilus-gtkhv update-demo-nautilus-rt5pl "
May 4 16:05:13.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods update-demo-nautilus-gtkhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:05:13.971: INFO: stderr: ""
May 4 16:05:13.971: INFO: stdout: ""
May 4 16:05:13.971: INFO: update-demo-nautilus-gtkhv is created but not running
May 4 16:05:18.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:05:19.126: INFO: stderr: ""
May 4 16:05:19.126: INFO: stdout: "update-demo-nautilus-gtkhv update-demo-nautilus-rt5pl "
May 4 16:05:19.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods update-demo-nautilus-gtkhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:05:19.281: INFO: stderr: ""
May 4 16:05:19.281: INFO: stdout: ""
May 4 16:05:19.281: INFO: update-demo-nautilus-gtkhv is created but not running
May 4 16:05:24.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:05:24.451: INFO: stderr: ""
May 4 16:05:24.451: INFO: stdout: "update-demo-nautilus-gtkhv update-demo-nautilus-rt5pl "
May 4 16:05:24.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods update-demo-nautilus-gtkhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:05:24.603: INFO: stderr: ""
May 4 16:05:24.603: INFO: stdout: "true"
May 4 16:05:24.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods update-demo-nautilus-gtkhv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 4 16:05:24.763: INFO: stderr: ""
May 4 16:05:24.763: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 4 16:05:24.763: INFO: validating pod update-demo-nautilus-gtkhv
May 4 16:05:24.767: INFO: got data: { "image": "nautilus.jpg" }
May 4 16:05:24.767: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 4 16:05:24.767: INFO: update-demo-nautilus-gtkhv is verified up and running
May 4 16:05:24.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods update-demo-nautilus-rt5pl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:05:24.927: INFO: stderr: ""
May 4 16:05:24.927: INFO: stdout: "true"
May 4 16:05:24.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods update-demo-nautilus-rt5pl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 4 16:05:25.105: INFO: stderr: ""
May 4 16:05:25.105: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 4 16:05:25.105: INFO: validating pod update-demo-nautilus-rt5pl
May 4 16:05:25.109: INFO: got data: { "image": "nautilus.jpg" }
May 4 16:05:25.109: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 4 16:05:25.109: INFO: update-demo-nautilus-rt5pl is verified up and running
STEP: using delete to clean up resources
May 4 16:05:25.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 delete --grace-period=0 --force -f -'
May 4 16:05:25.238: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 4 16:05:25.238: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 4 16:05:25.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get rc,svc -l name=update-demo --no-headers'
May 4 16:05:25.430: INFO: stderr: "No resources found in kubectl-3645 namespace.\n"
May 4 16:05:25.430: INFO: stdout: ""
May 4 16:05:25.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3645 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 4 16:05:25.599: INFO: stderr: ""
May 4 16:05:25.599: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:05:25.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3645" for this suite.
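The Update Demo commands above lean on `kubectl get -o template` with an `exists` helper that is a kubectl extension, not part of Go's standard `text/template`. The sketch below re-implements a plausible `exists` over nested maps and runs the log's container-status template against a pared-down pod object; the `exists` function body and the pod shape are illustrative assumptions, not kubectl's actual implementation.

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

// exists mimics kubectl's template helper: it walks nested map keys and
// reports whether the whole path is present. (Hypothetical re-implementation
// for illustration only.)
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

func main() {
	// A pared-down pod object, shaped like the JSON kubectl templates over.
	pod := map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
	// The readiness-check template from the log, verbatim.
	const tpl = `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
	t := template.Must(template.New("check").Funcs(template.FuncMap{"exists": exists}).Parse(tpl))
	if err := t.Execute(os.Stdout, pod); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	// Prints "true" once the container is running; while the pod is still
	// Pending, containerStatuses lacks the "running" state and the template
	// emits nothing -- which is why the log shows empty stdout, a 5s sleep,
	// and a retry until "true" appears.
}
```

The empty-stdout-then-retry behavior in the log falls out of this directly: the template is a predicate, and the test polls it rather than parsing structured status.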
• [SLOW TEST:12.293 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
    should create and stop a replication controller [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":15,"skipped":190,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:21.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
May 4 16:05:21.860: INFO: Waiting up to 5m0s for pod "pod-93c47095-28ae-42ca-bdaf-e189e70d6708" in namespace "emptydir-7281" to be "Succeeded or Failed"
May 4 16:05:21.863: INFO: Pod "pod-93c47095-28ae-42ca-bdaf-e189e70d6708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375574ms
May 4 16:05:23.865: INFO: Pod "pod-93c47095-28ae-42ca-bdaf-e189e70d6708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005133982s
May 4 16:05:25.868: INFO: Pod "pod-93c47095-28ae-42ca-bdaf-e189e70d6708": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00766833s
May 4 16:05:27.873: INFO: Pod "pod-93c47095-28ae-42ca-bdaf-e189e70d6708": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012773453s
STEP: Saw pod success
May 4 16:05:27.873: INFO: Pod "pod-93c47095-28ae-42ca-bdaf-e189e70d6708" satisfied condition "Succeeded or Failed"
May 4 16:05:27.876: INFO: Trying to get logs from node node1 pod pod-93c47095-28ae-42ca-bdaf-e189e70d6708 container test-container:
STEP: delete the pod
May 4 16:05:27.891: INFO: Waiting for pod pod-93c47095-28ae-42ca-bdaf-e189e70d6708 to disappear
May 4 16:05:27.893: INFO: Pod pod-93c47095-28ae-42ca-bdaf-e189e70d6708 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:05:27.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7281" for this suite.
• [SLOW TEST:6.070 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":134,"failed":0}
SSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:22.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
May 4 16:05:22.859: INFO: Waiting up to 5m0s for pod "var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a" in namespace "var-expansion-5607" to be "Succeeded or Failed"
May 4 16:05:22.860: INFO: Pod "var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.864402ms
May 4 16:05:24.864: INFO: Pod "var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005638005s
May 4 16:05:26.869: INFO: Pod "var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010023702s
May 4 16:05:28.873: INFO: Pod "var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014134488s
May 4 16:05:30.876: INFO: Pod "var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016946079s
STEP: Saw pod success
May 4 16:05:30.876: INFO: Pod "var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a" satisfied condition "Succeeded or Failed"
May 4 16:05:30.878: INFO: Trying to get logs from node node1 pod var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a container dapi-container:
STEP: delete the pod
May 4 16:05:30.889: INFO: Waiting for pod var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a to disappear
May 4 16:05:30.891: INFO: Pod var-expansion-90e514e0-901c-4932-8756-68e0c64fed1a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:05:30.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5607" for this suite.
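The Variable Expansion test just above exercises Kubernetes' `$(VAR_NAME)` substitution syntax in a volume subpath. A simplified stdlib Go sketch of that expansion follows; the regexp-based `expand` function and its leave-unknown-references-untouched behavior are illustrative assumptions, not the expander Kubernetes actually uses.

```go
package main

import (
	"fmt"
	"regexp"
)

// ref matches Kubernetes-style $(VAR_NAME) references.
var ref = regexp.MustCompile(`\$\(([A-Za-z_][A-Za-z0-9_]*)\)`)

// expand substitutes $(VAR) references in a subpath-style string from the
// given variable map. Unknown references are left as-is in this sketch.
func expand(s string, vars map[string]string) string {
	return ref.ReplaceAllStringFunc(s, func(m string) string {
		name := ref.FindStringSubmatch(m)[1]
		if v, ok := vars[name]; ok {
			return v
		}
		return m
	})
}

func main() {
	// POD_NAME here is a hypothetical env var, echoing the pod name in the log.
	vars := map[string]string{"POD_NAME": "var-expansion-90e514e0"}
	fmt.Println(expand("logs/$(POD_NAME)/out", vars))
	// prints: logs/var-expansion-90e514e0/out
}
```

The test's pod declares env vars and a subpath containing `$(...)` references, then verifies the container sees the expanded path; the sketch shows only the string-substitution step of that flow.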
• [SLOW TEST:8.074 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":9,"skipped":156,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:27.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-6f6fb990-2b1c-4593-ac04-e20883fc4530 STEP: Creating a pod to test consume configMaps May 4 16:05:27.948: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd5d6c95-34e1-46ed-a11f-f4f7d8041b6c" in namespace "configmap-5655" to be "Succeeded or Failed" May 4 16:05:27.950: INFO: Pod "pod-configmaps-bd5d6c95-34e1-46ed-a11f-f4f7d8041b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621194ms May 4 16:05:29.954: INFO: Pod "pod-configmaps-bd5d6c95-34e1-46ed-a11f-f4f7d8041b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005875809s May 4 16:05:31.957: INFO: Pod "pod-configmaps-bd5d6c95-34e1-46ed-a11f-f4f7d8041b6c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008789905s STEP: Saw pod success May 4 16:05:31.957: INFO: Pod "pod-configmaps-bd5d6c95-34e1-46ed-a11f-f4f7d8041b6c" satisfied condition "Succeeded or Failed" May 4 16:05:31.959: INFO: Trying to get logs from node node1 pod pod-configmaps-bd5d6c95-34e1-46ed-a11f-f4f7d8041b6c container configmap-volume-test: STEP: delete the pod May 4 16:05:31.972: INFO: Waiting for pod pod-configmaps-bd5d6c95-34e1-46ed-a11f-f4f7d8041b6c to disappear May 4 16:05:31.974: INFO: Pod pod-configmaps-bd5d6c95-34e1-46ed-a11f-f4f7d8041b6c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:31.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5655" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":138,"failed":0} SSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:20.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 4 16:05:21.008: INFO: PodSpec: initContainers in spec.initContainers 
[AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:34.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3801" for this suite. • [SLOW TEST:13.316 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":10,"skipped":303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:30.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-901d9f64-1814-4fcc-a1d8-2c193e5ab1c8 STEP: Creating a pod to test consume secrets May 4 16:05:30.953: INFO: Waiting up to 5m0s for pod "pod-secrets-410b547a-52c4-4f27-a98d-632f9738a477" in namespace "secrets-1216" to be "Succeeded or Failed" May 4 16:05:30.958: INFO: Pod "pod-secrets-410b547a-52c4-4f27-a98d-632f9738a477": 
Phase="Pending", Reason="", readiness=false. Elapsed: 4.437091ms May 4 16:05:32.961: INFO: Pod "pod-secrets-410b547a-52c4-4f27-a98d-632f9738a477": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00725029s May 4 16:05:34.967: INFO: Pod "pod-secrets-410b547a-52c4-4f27-a98d-632f9738a477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013256782s STEP: Saw pod success May 4 16:05:34.967: INFO: Pod "pod-secrets-410b547a-52c4-4f27-a98d-632f9738a477" satisfied condition "Succeeded or Failed" May 4 16:05:34.969: INFO: Trying to get logs from node node2 pod pod-secrets-410b547a-52c4-4f27-a98d-632f9738a477 container secret-env-test: STEP: delete the pod May 4 16:05:34.981: INFO: Waiting for pod pod-secrets-410b547a-52c4-4f27-a98d-632f9738a477 to disappear May 4 16:05:34.982: INFO: Pod pod-secrets-410b547a-52c4-4f27-a98d-632f9738a477 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:34.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1216" for this suite. 
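The Secrets test above creates a secret and a pod whose container reads it through an environment variable, then waits for the pod to reach "Succeeded or Failed". A minimal manifest sketch of that pattern; the resource names, key, and image are illustrative (the suite generates UUID-suffixed names), not taken from the log:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test          # illustrative; the suite uses a generated name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test    # container name as seen in the log
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA      # hypothetical variable name
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```

With restartPolicy Never, the pod phase settles at Succeeded or Failed, which is what the framework polls for.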
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:31.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 4 16:05:32.022: INFO: Waiting up to 5m0s for pod "var-expansion-2fc2ce4b-c48b-4587-968d-b6f7b275761f" in namespace "var-expansion-2597" to be "Succeeded or Failed" May 4 16:05:32.024: INFO: Pod "var-expansion-2fc2ce4b-c48b-4587-968d-b6f7b275761f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.840567ms May 4 16:05:34.028: INFO: Pod "var-expansion-2fc2ce4b-c48b-4587-968d-b6f7b275761f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005167899s May 4 16:05:36.030: INFO: Pod "var-expansion-2fc2ce4b-c48b-4587-968d-b6f7b275761f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007875535s May 4 16:05:38.033: INFO: Pod "var-expansion-2fc2ce4b-c48b-4587-968d-b6f7b275761f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.011112403s STEP: Saw pod success May 4 16:05:38.034: INFO: Pod "var-expansion-2fc2ce4b-c48b-4587-968d-b6f7b275761f" satisfied condition "Succeeded or Failed" May 4 16:05:38.036: INFO: Trying to get logs from node node1 pod var-expansion-2fc2ce4b-c48b-4587-968d-b6f7b275761f container dapi-container: STEP: delete the pod May 4 16:05:38.050: INFO: Waiting for pod var-expansion-2fc2ce4b-c48b-4587-968d-b6f7b275761f to disappear May 4 16:05:38.052: INFO: Pod var-expansion-2fc2ce4b-c48b-4587-968d-b6f7b275761f no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:38.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2597" for this suite. • [SLOW TEST:6.067 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":142,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:04.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] 
should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2098 STEP: creating service affinity-clusterip in namespace services-2098 STEP: creating replication controller affinity-clusterip in namespace services-2098 I0504 16:05:04.796123 29 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2098, replica count: 3 I0504 16:05:07.846660 29 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:05:10.846775 29 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:05:13.846929 29 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:05:13.853: INFO: Creating new exec pod May 4 16:05:20.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2098 exec execpod-affinity7wfdz -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 4 16:05:21.155: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" May 4 16:05:21.155: INFO: stdout: "" May 4 16:05:21.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2098 exec execpod-affinity7wfdz -- /bin/sh -x -c nc -zv -t -w 2 10.233.54.130 80' May 4 16:05:21.422: INFO: stderr: "+ nc -zv -t -w 2 10.233.54.130 80\nConnection to 10.233.54.130 80 port [tcp/http] succeeded!\n" May 4 16:05:21.422: INFO: stdout: "" May 4 16:05:21.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2098 exec 
execpod-affinity7wfdz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.54.130:80/ ; done' May 4 16:05:21.743: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.54.130:80/\n" May 4 16:05:21.743: INFO: stdout: "\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2\naffinity-clusterip-c5ms2" May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from 
host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Received response from host: affinity-clusterip-c5ms2 May 4 16:05:21.743: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-2098, will wait for the garbage collector to delete the pods May 4 16:05:21.807: INFO: Deleting ReplicationController affinity-clusterip took: 4.142343ms May 4 16:05:21.907: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.322731ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:40.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2098" for this suite. 
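The session-affinity test above stands up a 3-replica replication controller behind a ClusterIP service and curls it 16 times from one exec pod; every response coming back from the same backend (affinity-clusterip-c5ms2) is what proves affinity. A sketch of the service side, assuming port numbers and selector labels for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip
spec:
  type: ClusterIP
  sessionAffinity: ClientIP    # requests from one client IP stick to one backend pod
  selector:
    name: affinity-clusterip   # assumed label; must match the RC's pod template
  ports:
  - port: 80                   # the port the exec pod probes with nc/curl
    targetPort: 9376           # illustrative backend port
```

Without `sessionAffinity: ClientIP`, kube-proxy would spread the 16 requests across all three pods and the hostnames in the output would vary.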
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:35.367 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":165,"failed":0} SS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:34.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 4 16:05:34.448: INFO: Waiting up to 5m0s for pod "var-expansion-0dc238b3-881e-48ba-a3aa-c7b2e317777f" in namespace "var-expansion-965" to be "Succeeded or Failed" May 4 16:05:34.450: INFO: Pod "var-expansion-0dc238b3-881e-48ba-a3aa-c7b2e317777f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055375ms May 4 16:05:36.454: INFO: Pod "var-expansion-0dc238b3-881e-48ba-a3aa-c7b2e317777f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005161861s May 4 16:05:38.456: INFO: Pod "var-expansion-0dc238b3-881e-48ba-a3aa-c7b2e317777f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00789403s May 4 16:05:40.459: INFO: Pod "var-expansion-0dc238b3-881e-48ba-a3aa-c7b2e317777f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010884894s STEP: Saw pod success May 4 16:05:40.459: INFO: Pod "var-expansion-0dc238b3-881e-48ba-a3aa-c7b2e317777f" satisfied condition "Succeeded or Failed" May 4 16:05:40.462: INFO: Trying to get logs from node node2 pod var-expansion-0dc238b3-881e-48ba-a3aa-c7b2e317777f container dapi-container: STEP: delete the pod May 4 16:05:40.473: INFO: Waiting for pod var-expansion-0dc238b3-881e-48ba-a3aa-c7b2e317777f to disappear May 4 16:05:40.475: INFO: Pod var-expansion-0dc238b3-881e-48ba-a3aa-c7b2e317777f no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:40.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-965" for this suite. 
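The Variable Expansion tests above (container's command, and here container's args) exercise `$(VAR)` substitution, which the kubelet performs from the container's declared env vars before the process starts. A minimal sketch; the env var name and value are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container       # container name as seen in the log
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(MY_VAR)"]   # $(MY_VAR) is expanded by Kubernetes, not by the shell
    env:
    - name: MY_VAR             # hypothetical variable
      value: "from-env"
```

The test passes when the container's log shows the substituted value, i.e. the expansion happened in the pod spec rather than at shell runtime.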
• [SLOW TEST:6.065 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":354,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:35.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 16:05:35.464: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 16:05:37.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741135, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741135, loc:(*time.Location)(0x770c940)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741135, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741135, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:05:40.484: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:40.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7422" for this suite. STEP: Destroying namespace "webhook-7422-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.527 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":11,"skipped":213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:40.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:05:40.170: INFO: Waiting up to 5m0s for pod "downwardapi-volume-981863b5-0750-4007-a705-a8e4b8bb27c7" in namespace "projected-1232" to be "Succeeded or Failed" May 4 16:05:40.172: INFO: Pod "downwardapi-volume-981863b5-0750-4007-a705-a8e4b8bb27c7": 
Phase="Pending", Reason="", readiness=false. Elapsed: 1.690578ms May 4 16:05:42.175: INFO: Pod "downwardapi-volume-981863b5-0750-4007-a705-a8e4b8bb27c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004506314s May 4 16:05:44.178: INFO: Pod "downwardapi-volume-981863b5-0750-4007-a705-a8e4b8bb27c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007531748s STEP: Saw pod success May 4 16:05:44.178: INFO: Pod "downwardapi-volume-981863b5-0750-4007-a705-a8e4b8bb27c7" satisfied condition "Succeeded or Failed" May 4 16:05:44.180: INFO: Trying to get logs from node node2 pod downwardapi-volume-981863b5-0750-4007-a705-a8e4b8bb27c7 container client-container: STEP: delete the pod May 4 16:05:44.193: INFO: Waiting for pod downwardapi-volume-981863b5-0750-4007-a705-a8e4b8bb27c7 to disappear May 4 16:05:44.195: INFO: Pod downwardapi-volume-981863b5-0750-4007-a705-a8e4b8bb27c7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:44.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1232" for this suite. 
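The Projected downwardAPI test above checks that when a container declares no memory limit, the downward API reports the node's allocatable memory as the default. A sketch of that projection, with illustrative names and mount path:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume
spec:
  restartPolicy: Never
  containers:
  - name: client-container     # container name as seen in the log
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # no limit declared above, so node allocatable is surfaced
```

The same mechanism backs the non-projected `downwardAPI` volume type; the projected form just lets it share a mount with configMap/secret sources.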
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":167,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:05.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5682 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5682 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5682 May 4 16:04:05.903: INFO: Found 0 stateful pods, waiting for 1 May 4 16:04:15.906: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 4 16:04:15.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5682 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:04:16.215: INFO: stderr: "+ mv 
-v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:04:16.215: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:04:16.215: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 16:04:16.218: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 4 16:04:26.220: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 4 16:04:26.220: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:04:26.230: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999525s May 4 16:04:27.235: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99737239s May 4 16:04:28.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992493274s May 4 16:04:29.242: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.989224042s May 4 16:04:30.247: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.985528088s May 4 16:04:31.251: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.980480049s May 4 16:04:32.258: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.9770515s May 4 16:04:33.262: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.969199147s May 4 16:04:34.265: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96544598s May 4 16:04:35.269: INFO: Verifying statefulset ss doesn't scale past 1 for another 961.430657ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5682 May 4 16:04:36.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5682 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 16:04:36.545: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 4 
16:04:36.546: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 16:04:36.546: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 16:04:36.549: INFO: Found 1 stateful pods, waiting for 3 May 4 16:04:46.552: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 4 16:04:46.552: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 4 16:04:46.553: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 4 16:04:56.552: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 4 16:04:56.552: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 4 16:04:56.552: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 4 16:04:56.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5682 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:04:56.806: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:04:56.806: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:04:56.806: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 16:04:56.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5682 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:04:57.063: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:04:57.063: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:04:57.063: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 16:04:57.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5682 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:04:57.412: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:04:57.412: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:04:57.412: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 16:04:57.412: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:04:57.415: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 4 16:05:07.421: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 4 16:05:07.421: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 4 16:05:07.421: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 4 16:05:07.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999507s May 4 16:05:08.432: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997256763s May 4 16:05:09.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993848199s May 4 16:05:10.443: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988181891s May 4 16:05:11.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983045272s May 4 16:05:12.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.979459692s May 4 16:05:13.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976251097s May 4 16:05:14.458: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 2.972797746s May 4 16:05:15.462: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.968367257s May 4 16:05:16.466: INFO: Verifying statefulset ss doesn't scale past 3 for another 964.493782ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5682 May 4 16:05:17.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5682 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 16:05:17.736: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 4 16:05:17.736: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 16:05:17.736: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 16:05:17.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5682 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 16:05:18.073: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 4 16:05:18.073: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 16:05:18.073: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 16:05:18.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5682 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 16:05:18.342: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 4 16:05:18.343: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 16:05:18.343: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' 
-> '/usr/local/apache2/htdocs/index.html' May 4 16:05:18.343: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 4 16:05:48.353: INFO: Deleting all statefulset in ns statefulset-5682 May 4 16:05:48.361: INFO: Scaling statefulset ss to 0 May 4 16:05:48.371: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:05:48.373: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:48.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5682" for this suite. • [SLOW TEST:102.522 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client May 4 16:05:25.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 4 16:05:25.695: INFO: >>> kubeConfig: /root/.kube/config May 4 16:05:33.610: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:51.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9297" for this suite. • [SLOW TEST:25.584 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":16,"skipped":219,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:40.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:51.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2072" for this suite. • [SLOW TEST:11.060 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":-1,"completed":12,"skipped":284,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:48.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 4 16:05:48.434: INFO: Waiting up to 5m0s for pod "downward-api-cb0cb730-de52-4299-8fa1-367160460e8b" in namespace "downward-api-261" to be "Succeeded or Failed" May 4 16:05:48.437: INFO: Pod "downward-api-cb0cb730-de52-4299-8fa1-367160460e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.950008ms May 4 16:05:50.440: INFO: Pod "downward-api-cb0cb730-de52-4299-8fa1-367160460e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005711953s May 4 16:05:52.443: INFO: Pod "downward-api-cb0cb730-de52-4299-8fa1-367160460e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008461052s May 4 16:05:54.446: INFO: Pod "downward-api-cb0cb730-de52-4299-8fa1-367160460e8b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.011503051s STEP: Saw pod success May 4 16:05:54.446: INFO: Pod "downward-api-cb0cb730-de52-4299-8fa1-367160460e8b" satisfied condition "Succeeded or Failed" May 4 16:05:54.448: INFO: Trying to get logs from node node1 pod downward-api-cb0cb730-de52-4299-8fa1-367160460e8b container dapi-container: STEP: delete the pod May 4 16:05:54.624: INFO: Waiting for pod downward-api-cb0cb730-de52-4299-8fa1-367160460e8b to disappear May 4 16:05:54.626: INFO: Pod downward-api-cb0cb730-de52-4299-8fa1-367160460e8b no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:54.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-261" for this suite. • [SLOW TEST:6.230 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:40.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
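[Editor's note: the Downward API test above creates a pod whose container exposes limits.cpu/limits.memory as environment variables; with no resource limits set on the container, the values fall back to node allocatable. The following is a minimal illustrative manifest of that pattern — the pod name, image, and env var names here are hypothetical, not taken from the log (only the container name dapi-container appears above).]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container         # container name as in the log above
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: CPU_LIMIT            # no limits declared -> defaults to node allocatable
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```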
STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-610.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-610.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-610.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-610.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-610.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-610.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-610.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-610.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-610.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 78.43.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.43.78_udp@PTR;check="$$(dig +tcp +noall +answer +search 78.43.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.43.78_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-610.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-610.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-610.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-610.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-610.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-610.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-610.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-610.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-610.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 78.43.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.43.78_udp@PTR;check="$$(dig +tcp +noall +answer +search 78.43.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.43.78_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 16:05:50.554: INFO: Unable to read wheezy_udp@dns-test-service.dns-610.svc.cluster.local from pod dns-610/dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c: the server could not find the requested resource (get pods dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c) May 4 16:05:50.560: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-610.svc.cluster.local from pod dns-610/dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c: the server could not find the requested resource (get pods dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c) May 4 16:05:50.563: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-610.svc.cluster.local from pod dns-610/dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c: the server could not find the requested resource (get pods dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c) May 4 16:05:50.580: INFO: Unable to read jessie_udp@dns-test-service.dns-610.svc.cluster.local from pod dns-610/dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c: the server could not find the requested resource (get pods dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c) May 4 16:05:50.584: INFO: Unable to read jessie_tcp@dns-test-service.dns-610.svc.cluster.local from pod dns-610/dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c: the server could not find the requested resource (get pods dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c) May 4 16:05:50.586: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-610.svc.cluster.local from pod dns-610/dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c: the server could not find the requested resource (get pods dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c) May 4 16:05:50.588: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-610.svc.cluster.local from pod 
dns-610/dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c: the server could not find the requested resource (get pods dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c) May 4 16:05:50.603: INFO: Lookups using dns-610/dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c failed for: [wheezy_udp@dns-test-service.dns-610.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-610.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-610.svc.cluster.local jessie_udp@dns-test-service.dns-610.svc.cluster.local jessie_tcp@dns-test-service.dns-610.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-610.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-610.svc.cluster.local] May 4 16:05:55.657: INFO: DNS probes using dns-610/dns-test-ed2dd561-bf3e-4744-9e85-6ec16072a24c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:55.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-610" for this suite. 
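[Editor's note: the probe commands logged above construct two derived DNS names by hand: a pod A record built by dashing the pod IP under `<ns>.pod.cluster.local` (the `awk -F.` pipeline), and a PTR query name built by reversing the service IP's octets under `in-addr.arpa.` (10.233.43.78 becomes 78.43.233.10.in-addr.arpa.). A small sketch of that name construction, assuming only POSIX sh and awk; the pod IP passed in is illustrative:]

```shell
#!/bin/sh
# Pod A record: IP octets joined with dashes under <namespace>.pod.cluster.local,
# mirroring the awk pipeline in the probe script above.
pod_a_record() {
  echo "$1" | awk -F. -v ns="$2" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

# PTR query name: octets reversed under in-addr.arpa.
ptr_name() {
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}

pod_a_record 10.244.3.88 dns-610   # hypothetical pod IP -> 10-244-3-88.dns-610.pod.cluster.local
ptr_name 10.233.43.78              # service IP from the log -> 78.43.233.10.in-addr.arpa.
```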
• [SLOW TEST:15.189 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":12,"skipped":360,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:55.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:55.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5229" for this suite. 
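[Editor's note: the ConfigMap lifecycle test above exercises create, fetch, patch, list-by-label-selector, and delete-by-collection. A minimal sketch of the kind of labeled ConfigMap that flow operates on — the name, label, and data keys are hypothetical, since the log does not show the object itself:]

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lifecycle-demo      # illustrative name
  labels:
    test: lifecycle         # hypothetical label, used for list/delete by selector
data:
  key: value
```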
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":13,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:51.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 4 16:05:56.316: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:56.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7479" for this suite. 
• [SLOW TEST:5.069 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":220,"failed":0} SSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:56.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:56.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-936" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":224,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:56.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:56.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8316" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":19,"skipped":230,"failed":0} SSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:51.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:05:51.878: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-67ee8a4f-79d9-473d-9214-3c4207dfb620" in namespace "security-context-test-9176" to be "Succeeded or Failed" May 4 16:05:51.883: INFO: Pod "busybox-privileged-false-67ee8a4f-79d9-473d-9214-3c4207dfb620": Phase="Pending", Reason="", readiness=false. Elapsed: 5.332218ms May 4 16:05:53.886: INFO: Pod "busybox-privileged-false-67ee8a4f-79d9-473d-9214-3c4207dfb620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008526075s May 4 16:05:55.889: INFO: Pod "busybox-privileged-false-67ee8a4f-79d9-473d-9214-3c4207dfb620": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011018434s May 4 16:05:57.892: INFO: Pod "busybox-privileged-false-67ee8a4f-79d9-473d-9214-3c4207dfb620": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014008873s May 4 16:05:57.892: INFO: Pod "busybox-privileged-false-67ee8a4f-79d9-473d-9214-3c4207dfb620" satisfied condition "Succeeded or Failed" May 4 16:05:57.914: INFO: Got logs for pod "busybox-privileged-false-67ee8a4f-79d9-473d-9214-3c4207dfb620": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:05:57.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9176" for this suite. • [SLOW TEST:6.077 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":293,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:08.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 4 16:05:08.459: INFO: PodSpec: initContainers in spec.initContainers May 4 16:06:01.160: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0ab7832e-e44c-4c51-9e23-27d65d75897a", GenerateName:"", Namespace:"init-container-2066", SelfLink:"/api/v1/namespaces/init-container-2066/pods/pod-init-0ab7832e-e44c-4c51-9e23-27d65d75897a", UID:"8f292235-ad0d-4948-8be8-16789bde2505", ResourceVersion:"25884", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63755741108, loc:(*time.Location)(0x770c940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"459872250"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.88\"\n ],\n \"mac\": \"96:74:e7:ed:ea:0a\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.88\"\n ],\n \"mac\": \"96:74:e7:ed:ea:0a\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0041c6040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041c6060)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0041c6080), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041c60a0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0041c60c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041c60e0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jrl5j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc008b1a000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jrl5j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jrl5j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, 
scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jrl5j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0055e40b0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004336000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055e4130)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055e4150)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0055e4158), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0055e415c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00452e030), Overhead:v1.ResourceList(nil), 
TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741108, loc:(*time.Location)(0x770c940)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741108, loc:(*time.Location)(0x770c940)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741108, loc:(*time.Location)(0x770c940)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741108, loc:(*time.Location)(0x770c940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.208", PodIP:"10.244.3.88", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.88"}}, StartTime:(*v1.Time)(0xc0041c6100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0043360e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc004336150)}, 
Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://1f2939352c5834462e90c6e0b72ddd9e9963e84a349b48e2fe6de83d12de747b", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0041c6140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0041c6120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0055e41df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:01.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2066" for this suite. 
• [SLOW TEST:52.728 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":14,"skipped":229,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:54.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 16:05:55.166: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 16:05:57.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, 
loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:05:59.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:06:01.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741155, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:06:04.182: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:04.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3655" for this suite. STEP: Destroying namespace "webhook-3655-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.579 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":6,"skipped":62,"failed":0} S ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:44.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:05:44.262: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Pending, waiting for it to be Running (with Ready = true) May 4 16:05:46.265: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Pending, waiting for it to be Running (with Ready = true) May 4 16:05:48.266: INFO: The status of Pod 
test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = false) May 4 16:05:50.265: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = false) May 4 16:05:52.268: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = false) May 4 16:05:54.267: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = false) May 4 16:05:56.265: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = false) May 4 16:05:58.266: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = false) May 4 16:06:00.265: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = false) May 4 16:06:02.265: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = false) May 4 16:06:04.266: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = false) May 4 16:06:06.266: INFO: The status of Pod test-webserver-d72af79a-15cf-49f5-b2b8-05e3f178bcc6 is Running (Ready = true) May 4 16:06:06.268: INFO: Container started at 2021-05-04 16:05:46 +0000 UTC, pod became ready at 2021-05-04 16:06:04 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:06.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5303" for this suite. 
• [SLOW TEST:22.047 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":177,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:57.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-6e8369a7-6c67-4420-8fb8-640bd685d849 STEP: Creating a pod to test consume secrets May 4 16:05:57.999: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8" in namespace "projected-6305" to be "Succeeded or Failed" May 4 16:05:58.005: INFO: Pod "pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.621025ms May 4 16:06:00.009: INFO: Pod "pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009826565s May 4 16:06:02.013: INFO: Pod "pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013314485s May 4 16:06:04.016: INFO: Pod "pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016815327s May 4 16:06:06.019: INFO: Pod "pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019881823s May 4 16:06:08.022: INFO: Pod "pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022444865s STEP: Saw pod success May 4 16:06:08.022: INFO: Pod "pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8" satisfied condition "Succeeded or Failed" May 4 16:06:08.024: INFO: Trying to get logs from node node2 pod pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8 container projected-secret-volume-test: STEP: delete the pod May 4 16:06:08.038: INFO: Waiting for pod pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8 to disappear May 4 16:06:08.040: INFO: Pod pod-projected-secrets-bac81782-1af2-4715-949a-e941989a17a8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:08.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6305" for this suite. 
• [SLOW TEST:10.089 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":307,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:05:07.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:05:07.744: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:08.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5950" for this suite. 
• [SLOW TEST:61.275 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":5,"skipped":106,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:01.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:06:01.194: INFO: Creating deployment "webserver-deployment" May 4 16:06:01.198: INFO: Waiting for observed generation 1 May 4 16:06:03.203: INFO: Waiting for all required pods to come up May 4 16:06:03.207: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 4 16:06:15.212: INFO: Waiting for deployment "webserver-deployment" to complete May 4 16:06:15.216: INFO: Updating 
deployment "webserver-deployment" with a non-existent image May 4 16:06:15.222: INFO: Updating deployment webserver-deployment May 4 16:06:15.222: INFO: Waiting for observed generation 2 May 4 16:06:17.226: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 4 16:06:17.228: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 4 16:06:17.230: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 4 16:06:17.237: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 4 16:06:17.237: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 4 16:06:17.240: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 4 16:06:17.243: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 4 16:06:17.243: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 4 16:06:17.250: INFO: Updating deployment webserver-deployment May 4 16:06:17.250: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 4 16:06:17.254: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 4 16:06:17.255: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 May 4 16:06:17.261: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8071 /apis/apps/v1/namespaces/deployment-8071/deployments/webserver-deployment 4f9120aa-c317-4756-92a5-e4dce496a471 26423 3 2021-05-04 16:06:01 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-04 16:06:01 +0000 
UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007849298 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-04 16:06:12 +0000 UTC,LastTransitionTime:2021-05-04 16:06:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-05-04 16:06:15 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 4 16:06:17.264: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-8071 /apis/apps/v1/namespaces/deployment-8071/replicasets/webserver-deployment-795d758f88 b16796fd-3432-4964-909c-8889bbb52c89 26426 3 2021-05-04 16:06:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 4f9120aa-c317-4756-92a5-e4dce496a471 0xc003807cf7 0xc003807cf8}] [] [{kube-controller-manager Update apps/v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f9120aa-c317-4756-92a5-e4dce496a471\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003807d78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 16:06:17.264: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 4 16:06:17.264: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-8071 /apis/apps/v1/namespaces/deployment-8071/replicasets/webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 26424 3 2021-05-04 16:06:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 4f9120aa-c317-4756-92a5-e4dce496a471 0xc003807dd7 0xc003807dd8}] [] [{kube-controller-manager Update apps/v1 2021-05-04 16:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f9120aa-c317-4756-92a5-e4dce496a471\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v
1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003807e48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 4 16:06:17.269: INFO: Pod "webserver-deployment-795d758f88-4lvtq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4lvtq webserver-deployment-795d758f88- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-795d758f88-4lvtq c229992d-3a3c-42e1-82b6-4f57e21fc554 26380 0 2021-05-04 16:06:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b16796fd-3432-4964-909c-8889bbb52c89 0xc000acb96f 0xc000acb9a0}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b16796fd-3432-4964-909c-8889bbb52c89\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodConditio
n{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-05-04 16:06:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.269: INFO: Pod "webserver-deployment-795d758f88-7znn8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7znn8 webserver-deployment-795d758f88- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-795d758f88-7znn8 2b7e4358-5a0a-4a6c-9ca5-2777e7bdd337 26392 0 2021-05-04 16:06:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b16796fd-3432-4964-909c-8889bbb52c89 0xc000acbc9f 0xc000acbcb0}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b16796fd-3432-4964-909c-8889bbb52c89\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodConditio
n{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-05-04 16:06:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.269: INFO: Pod "webserver-deployment-795d758f88-dx78x" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-dx78x webserver-deployment-795d758f88- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-795d758f88-dx78x 8a5c8eac-e85f-4cc4-91ee-f99331add6f0 26434 0 2021-05-04 16:06:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b16796fd-3432-4964-909c-8889bbb52c89 0xc000acbe3f 0xc000acbe50}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b16796fd-3432-4964-909c-8889bbb52c89\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,}
,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.270: INFO: Pod "webserver-deployment-795d758f88-h99dz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-h99dz 
webserver-deployment-795d758f88- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-795d758f88-h99dz b86f44eb-a723-467f-83bf-8b89e87642dd 26403 0 2021-05-04 16:06:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b16796fd-3432-4964-909c-8889bbb52c89 0xc000acbfaf 0xc000acbfc0}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b16796fd-3432-4964-909c-8889bbb52c89\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPoli
cy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 
16:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-05-04 16:06:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.270: INFO: Pod "webserver-deployment-795d758f88-kjcbp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kjcbp webserver-deployment-795d758f88- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-795d758f88-kjcbp b7ee6188-4b7d-403a-aa29-a2f1b8c49b40 26375 0 2021-05-04 16:06:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b16796fd-3432-4964-909c-8889bbb52c89 0xc00312006f 0xc0001d9e20}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b16796fd-3432-4964-909c-8889bbb52c89\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPoli
cy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 
16:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-05-04 16:06:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.270: INFO: Pod "webserver-deployment-795d758f88-nxshd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-nxshd webserver-deployment-795d758f88- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-795d758f88-nxshd ae43adea-9cf2-4dfc-a247-79e3dee95a3b 26420 0 2021-05-04 16:06:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.112" ], "mac": "4e:67:2e:43:7b:25", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.112" ], "mac": "4e:67:2e:43:7b:25", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b16796fd-3432-4964-909c-8889bbb52c89 0xc0003c019f 0xc0003c01d0}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b16796fd-3432-4964-909c-8889bbb52c89\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-04 16:06:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-05-04 16:06:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot
:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-05-04 16:06:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.270: INFO: Pod "webserver-deployment-dd94f59b7-2rzdv" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2rzdv webserver-deployment-dd94f59b7- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-dd94f59b7-2rzdv 8e02349d-6ebd-4086-bfc0-bc98943dd058 26288 0 2021-05-04 16:06:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.107" ], "mac": "46:2a:05:dd:93:90", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.107" ], "mac": "46:2a:05:dd:93:90", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 0xc0003c06cf 0xc0003c0760}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67ae3283-2970-4083-a391-be290c38a85c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:06:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.107\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},
},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pre
emptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.107,StartTime:2021-05-04 16:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:06:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://01b18af15966e0ac6fba6a6a74d0c2b1a5c0712f060d66c4474361cf17ee31c1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.107,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.271: INFO: Pod "webserver-deployment-dd94f59b7-684p8" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-684p8 webserver-deployment-dd94f59b7- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-dd94f59b7-684p8 ecb9b508-e853-4938-8a0a-a80288618928 26160 0 2021-05-04 16:06:01 +0000 UTC 
map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.78" ], "mac": "2a:7c:bb:29:2e:64", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.78" ], "mac": "2a:7c:bb:29:2e:64", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 0xc0003c0fef 0xc0003c1060}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67ae3283-2970-4083-a391-be290c38a85c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:06:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:06:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.78,StartTime:2021-05-04 16:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:06:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ee6d8ec1331682b3491d9521f85dc59a20143a64f8c59972e16c69473099486c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.271: INFO: Pod "webserver-deployment-dd94f59b7-d998n" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-d998n webserver-deployment-dd94f59b7- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-dd94f59b7-d998n 22ff8620-6139-464b-b699-7c5244561568 26246 0 2021-05-04 16:06:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.80" ], "mac": "d6:b8:5e:18:f1:25", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.80" ], "mac": "d6:b8:5e:18:f1:25", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 0xc0003c162f 0xc0003c16b0}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67ae3283-2970-4083-a391-be290c38a85c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:06:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},}
,},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pree
mptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.80,StartTime:2021-05-04 16:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:06:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://846d0f20bd92b3d815efb3ece026d408e1baf8b3cf28287557efa8ad0726b0ee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.271: INFO: Pod "webserver-deployment-dd94f59b7-kpkhd" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kpkhd webserver-deployment-dd94f59b7- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-dd94f59b7-kpkhd 2eab64f2-3476-4fcc-ad91-1d4ebd63fcff 26277 0 2021-05-04 16:06:01 +0000 UTC map[name:httpd 
pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.108" ], "mac": "22:a3:e3:0a:b6:8d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.108" ], "mac": "22:a3:e3:0a:b6:8d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 0xc0003c1b3f 0xc0003c1ba0}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67ae3283-2970-4083-a391-be290c38a85c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:06:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.108\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vo
lumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.108,StartTime:2021-05-04 16:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:06:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://03e2bb7033584270d91766f264fcc923ac55578321bb6d4f03804537703317c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.272: INFO: Pod "webserver-deployment-dd94f59b7-pzv5c" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pzv5c webserver-deployment-dd94f59b7- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-dd94f59b7-pzv5c 24afa664-4327-402d-85ef-18d9ebce3d3d 26431 0 2021-05-04 16:06:17 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 0xc0003c1fcf 0xc00050a340}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67ae3283-2970-4083-a391-be290c38a85c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:
nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.272: INFO: Pod "webserver-deployment-dd94f59b7-qxggj" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qxggj webserver-deployment-dd94f59b7- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-dd94f59b7-qxggj ebb6c044-8cc5-40bb-a9d9-48c491e529ac 26131 0 2021-05-04 16:06:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.104" ], "mac": "ee:53:4e:33:6c:c6", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.104" ], "mac": "ee:53:4e:33:6c:c6", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 0xc00050a61f 0xc00050a670}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67ae3283-2970-4083-a391-be290c38a85c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:06:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:06:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.104\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vo
lumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.104,StartTime:2021-05-04 16:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:06:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ce025ee77b57b761e628474d1e00e276ed74b1883b85c0318f4c72defecea6a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.272: INFO: Pod "webserver-deployment-dd94f59b7-rgqw9" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rgqw9 webserver-deployment-dd94f59b7- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-dd94f59b7-rgqw9 a025133d-60cd-4b59-8099-73bd7a84cd26 26214 0 2021-05-04 16:06:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.81" ], "mac": "ea:95:27:49:8f:74", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.81" ], "mac": "ea:95:27:49:8f:74", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 0xc00050aa8f 0xc00050ab00}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67ae3283-2970-4083-a391-be290c38a85c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:06:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.81\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},}
,},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pree
mptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.81,StartTime:2021-05-04 16:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:06:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d1dd07464a2c82f6ff834b44cae2bf368cca635938ca83733ea5389b8eadb416,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.272: INFO: Pod "webserver-deployment-dd94f59b7-rrdlr" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rrdlr webserver-deployment-dd94f59b7- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-dd94f59b7-rrdlr c2e5640f-0447-4090-90a9-90837256aeee 26128 0 2021-05-04 16:06:01 +0000 UTC map[name:httpd 
pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.105" ], "mac": "b6:0c:fe:8a:13:23", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.105" ], "mac": "b6:0c:fe:8a:13:23", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 0xc00050b03f 0xc00050b070}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67ae3283-2970-4083-a391-be290c38a85c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:06:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.105\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vo
lumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.105,StartTime:2021-05-04 16:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:06:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b20530c7ced24140544ef0ddff76f5e75a8843f3544d848cef4b85a1620b30b1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:06:17.273: INFO: Pod "webserver-deployment-dd94f59b7-vpb6l" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vpb6l webserver-deployment-dd94f59b7- deployment-8071 /api/v1/namespaces/deployment-8071/pods/webserver-deployment-dd94f59b7-vpb6l 93689b2b-2b39-4dbd-a7bc-98d9d5a129f7 26224 0 2021-05-04 16:06:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.106" ], "mac": "c6:89:e2:19:bc:85", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.106" ], "mac": "c6:89:e2:19:bc:85", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 67ae3283-2970-4083-a391-be290c38a85c 0xc00050b40f 0xc00050b420}] [] [{kube-controller-manager Update v1 2021-05-04 16:06:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67ae3283-2970-4083-a391-be290c38a85c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:06:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.106\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpd4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpd4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},
},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpd4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pre
emptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:06:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.106,StartTime:2021-05-04 16:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:06:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3952111d48de28d1cb44021083221e8c8a7ec633b350d2bf758e674a5988c20c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.106,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:17.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8071" for this suite. 
• [SLOW TEST:16.106 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":15,"skipped":230,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:06.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:06:06.310: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:18.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4278" for this suite. 
• [SLOW TEST:12.072 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":182,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:04.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:06:04.304: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 4 16:06:14.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8910 --namespace=crd-publish-openapi-8910 create -f -' May 4 16:06:14.647: INFO: stderr: "" May 4 16:06:14.647: INFO: stdout: "e2e-test-crd-publish-openapi-3680-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 4 16:06:14.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8910 --namespace=crd-publish-openapi-8910 delete e2e-test-crd-publish-openapi-3680-crds test-cr' May 4 
16:06:14.807: INFO: stderr: ""
May 4 16:06:14.807: INFO: stdout: "e2e-test-crd-publish-openapi-3680-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May 4 16:06:14.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8910 --namespace=crd-publish-openapi-8910 apply -f -'
May 4 16:06:15.095: INFO: stderr: ""
May 4 16:06:15.095: INFO: stdout: "e2e-test-crd-publish-openapi-3680-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 4 16:06:15.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8910 --namespace=crd-publish-openapi-8910 delete e2e-test-crd-publish-openapi-3680-crds test-cr'
May 4 16:06:15.248: INFO: stderr: ""
May 4 16:06:15.248: INFO: stdout: "e2e-test-crd-publish-openapi-3680-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 4 16:06:15.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8910 explain e2e-test-crd-publish-openapi-3680-crds'
May 4 16:06:15.535: INFO: stderr: ""
May 4 16:06:15.535: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3680-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:06:18.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8910" for this suite.
• [SLOW TEST:14.258 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:55.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:05:56.047: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5421d100-85ac-4e81-bcdf-a0d72db39d6e", Controller:(*bool)(0xc0045845e2), BlockOwnerDeletion:(*bool)(0xc0045845e3)}}
May 4 16:05:56.051: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"39935b70-e3cd-4377-aa10-a3dfeff14555", Controller:(*bool)(0xc0007ae7ea), BlockOwnerDeletion:(*bool)(0xc0007ae7eb)}}
May 4 16:05:56.055: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"49789de2-b4aa-4e1a-afe6-6ddace105823", Controller:(*bool)(0xc000481232), BlockOwnerDeletion:(*bool)(0xc000481233)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:06:21.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1996" for this suite.
• [SLOW TEST:25.079 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":14,"skipped":461,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:06:18.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 4 16:06:18.411: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3" in namespace "projected-4728" to be "Succeeded or Failed"
May 4 16:06:18.413: INFO: Pod "downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144208ms
May 4 16:06:20.416: INFO: Pod "downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005128809s
May 4 16:06:22.418: INFO: Pod "downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007680884s
May 4 16:06:24.422: INFO: Pod "downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011526566s
May 4 16:06:26.426: INFO: Pod "downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014891407s
May 4 16:06:28.429: INFO: Pod "downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018818448s
May 4 16:06:30.433: INFO: Pod "downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.022569248s
STEP: Saw pod success
May 4 16:06:30.433: INFO: Pod "downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3" satisfied condition "Succeeded or Failed"
May 4 16:06:30.436: INFO: Trying to get logs from node node1 pod downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3 container client-container:
STEP: delete the pod
May 4 16:06:30.451: INFO: Waiting for pod downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3 to disappear
May 4 16:06:30.453: INFO: Pod downwardapi-volume-1dcbf007-7484-4856-a71a-2e268a4480f3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:06:30.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4728" for this suite.
• [SLOW TEST:12.083 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:06:08.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 4 16:06:20.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 4 16:06:20.147: INFO: Pod pod-with-poststart-exec-hook still exists
May 4 16:06:22.148: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 4 16:06:22.151: INFO: Pod pod-with-poststart-exec-hook still exists
May 4 16:06:24.147: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 4 16:06:24.151: INFO: Pod pod-with-poststart-exec-hook still exists
May 4 16:06:26.147: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 4 16:06:26.150: INFO: Pod pod-with-poststart-exec-hook still exists
May 4 16:06:28.148: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 4 16:06:28.152: INFO: Pod pod-with-poststart-exec-hook still exists
May 4 16:06:30.147: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 4 16:06:30.151: INFO: Pod pod-with-poststart-exec-hook still exists
May 4 16:06:32.148: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 4 16:06:32.152: INFO: Pod pod-with-poststart-exec-hook still exists
May 4 16:06:34.148: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 4 16:06:34.152: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:06:34.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-719" for this suite.
• [SLOW TEST:26.106 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:38.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:06:34.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1334" for this suite.
• [SLOW TEST:56.299 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":158,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:06:30.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-d5f4fad5-485e-4491-9f34-956fee5e81a9
STEP: Creating a pod to test consume configMaps
May 4 16:06:30.576: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72499e5c-00de-452a-bf0a-869577d000e9" in namespace "projected-4666" to be "Succeeded or Failed"
May 4 16:06:30.578: INFO: Pod "pod-projected-configmaps-72499e5c-00de-452a-bf0a-869577d000e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089231ms
May 4 16:06:32.583: INFO: Pod "pod-projected-configmaps-72499e5c-00de-452a-bf0a-869577d000e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006602628s
May 4 16:06:34.587: INFO: Pod "pod-projected-configmaps-72499e5c-00de-452a-bf0a-869577d000e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01132243s
May 4 16:06:36.591: INFO: Pod "pod-projected-configmaps-72499e5c-00de-452a-bf0a-869577d000e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01507599s
STEP: Saw pod success
May 4 16:06:36.591: INFO: Pod "pod-projected-configmaps-72499e5c-00de-452a-bf0a-869577d000e9" satisfied condition "Succeeded or Failed"
May 4 16:06:36.593: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-72499e5c-00de-452a-bf0a-869577d000e9 container projected-configmap-volume-test:
STEP: delete the pod
May 4 16:06:36.607: INFO: Waiting for pod pod-projected-configmaps-72499e5c-00de-452a-bf0a-869577d000e9 to disappear
May 4 16:06:36.609: INFO: Pod pod-projected-configmaps-72499e5c-00de-452a-bf0a-869577d000e9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:06:36.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4666" for this suite.
• [SLOW TEST:6.078 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:06:21.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 4 16:06:21.613: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 4 16:06:23.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:06:25.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:06:27.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:06:29.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:06:31.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:06:33.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741181, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 4 16:06:36.631: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:06:36.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6555" for this suite.
STEP: Destroying namespace "webhook-6555-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.622 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":15,"skipped":468,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:05:56.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should scale a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
May 4 16:05:56.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 create -f -'
May 4 16:05:56.793: INFO: stderr: ""
May 4 16:05:56.793: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 4 16:05:56.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:05:56.942: INFO: stderr: ""
May 4 16:05:56.942: INFO: stdout: "update-demo-nautilus-l8rd2 update-demo-nautilus-z5x78 "
May 4 16:05:56.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-l8rd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:05:57.096: INFO: stderr: ""
May 4 16:05:57.097: INFO: stdout: ""
May 4 16:05:57.097: INFO: update-demo-nautilus-l8rd2 is created but not running
May 4 16:06:02.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:06:02.250: INFO: stderr: ""
May 4 16:06:02.250: INFO: stdout: "update-demo-nautilus-l8rd2 update-demo-nautilus-z5x78 "
May 4 16:06:02.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-l8rd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:06:02.400: INFO: stderr: ""
May 4 16:06:02.400: INFO: stdout: ""
May 4 16:06:02.400: INFO: update-demo-nautilus-l8rd2 is created but not running
May 4 16:06:07.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:06:07.555: INFO: stderr: ""
May 4 16:06:07.555: INFO: stdout: "update-demo-nautilus-l8rd2 update-demo-nautilus-z5x78 "
May 4 16:06:07.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-l8rd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:06:07.701: INFO: stderr: ""
May 4 16:06:07.701: INFO: stdout: "true"
May 4 16:06:07.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-l8rd2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 4 16:06:07.848: INFO: stderr: ""
May 4 16:06:07.848: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 4 16:06:07.848: INFO: validating pod update-demo-nautilus-l8rd2
May 4 16:06:07.852: INFO: got data: { "image": "nautilus.jpg" }
May 4 16:06:07.852: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 4 16:06:07.852: INFO: update-demo-nautilus-l8rd2 is verified up and running
May 4 16:06:07.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-z5x78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:06:08.020: INFO: stderr: ""
May 4 16:06:08.020: INFO: stdout: "true"
May 4 16:06:08.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-z5x78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 4 16:06:08.174: INFO: stderr: ""
May 4 16:06:08.174: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 4 16:06:08.174: INFO: validating pod update-demo-nautilus-z5x78
May 4 16:06:08.179: INFO: got data: { "image": "nautilus.jpg" }
May 4 16:06:08.179: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 4 16:06:08.179: INFO: update-demo-nautilus-z5x78 is verified up and running
STEP: scaling down the replication controller
May 4 16:06:08.188: INFO: scanned /root for discovery docs:
May 4 16:06:08.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 scale rc update-demo-nautilus --replicas=1 --timeout=5m'
May 4 16:06:08.382: INFO: stderr: ""
May 4 16:06:08.382: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
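The readiness check this test keeps re-running is an ordinary Go template passed to `kubectl get pods -o template`. The sketch below evaluates that exact template locally against hand-built, pod-shaped maps; the `exists` function here is a minimal stand-in written for this example (kubectl's go-template printer supplies its own helper of that name), so only the template string itself is taken verbatim from the log.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// checkTmpl is the readiness template from the kubectl invocations in the log.
const checkTmpl = `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`

// exists is a stand-in helper: it walks nested map keys and reports whether
// the whole path is present. (Assumption: mimics the behavior the e2e test
// relies on; it is not kubectl's implementation.)
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

// renderReadiness executes the template against a pod-shaped map and returns
// what kubectl would print: "true" for a running update-demo container,
// "" otherwise (which the suite logs as "created but not running").
func renderReadiness(pod map[string]interface{}) string {
	t := template.Must(template.New("check").
		Funcs(template.FuncMap{"exists": exists}).
		Parse(checkTmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, pod); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	running := map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
	pending := map[string]interface{}{"status": map[string]interface{}{}}
	fmt.Printf("running: %q  pending: %q\n", renderReadiness(running), renderReadiness(pending))
}
```

This is why the log alternates between empty stdout (pod still Pending, no `state.running` yet) and `"true"` once the container is up.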
May 4 16:06:08.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:06:08.570: INFO: stderr: ""
May 4 16:06:08.570: INFO: stdout: "update-demo-nautilus-l8rd2 update-demo-nautilus-z5x78 "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 4 16:06:13.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:06:13.712: INFO: stderr: ""
May 4 16:06:13.712: INFO: stdout: "update-demo-nautilus-l8rd2 update-demo-nautilus-z5x78 "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 4 16:06:18.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:06:18.881: INFO: stderr: ""
May 4 16:06:18.881: INFO: stdout: "update-demo-nautilus-l8rd2 update-demo-nautilus-z5x78 "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 4 16:06:23.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:06:24.048: INFO: stderr: ""
May 4 16:06:24.049: INFO: stdout: "update-demo-nautilus-z5x78 "
May 4 16:06:24.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-z5x78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:06:24.219: INFO: stderr: ""
May 4 16:06:24.219: INFO: stdout: "true"
May 4 16:06:24.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-z5x78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 4 16:06:24.371: INFO: stderr: ""
May 4 16:06:24.371: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 4 16:06:24.371: INFO: validating pod update-demo-nautilus-z5x78
May 4 16:06:24.375: INFO: got data: { "image": "nautilus.jpg" }
May 4 16:06:24.375: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 4 16:06:24.375: INFO: update-demo-nautilus-z5x78 is verified up and running
STEP: scaling up the replication controller
May 4 16:06:24.386: INFO: scanned /root for discovery docs:
May 4 16:06:24.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 scale rc update-demo-nautilus --replicas=2 --timeout=5m'
May 4 16:06:24.595: INFO: stderr: ""
May 4 16:06:24.595: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 4 16:06:24.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:06:24.744: INFO: stderr: ""
May 4 16:06:24.744: INFO: stdout: "update-demo-nautilus-qz546 update-demo-nautilus-z5x78 "
May 4 16:06:24.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-qz546 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:06:24.903: INFO: stderr: ""
May 4 16:06:24.903: INFO: stdout: ""
May 4 16:06:24.903: INFO: update-demo-nautilus-qz546 is created but not running
May 4 16:06:29.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:06:30.057: INFO: stderr: ""
May 4 16:06:30.057: INFO: stdout: "update-demo-nautilus-qz546 update-demo-nautilus-z5x78 "
May 4 16:06:30.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-qz546 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:06:30.204: INFO: stderr: ""
May 4 16:06:30.204: INFO: stdout: ""
May 4 16:06:30.205: INFO: update-demo-nautilus-qz546 is created but not running
May 4 16:06:35.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 4 16:06:35.374: INFO: stderr: ""
May 4 16:06:35.374: INFO: stdout: "update-demo-nautilus-qz546 update-demo-nautilus-z5x78 "
May 4 16:06:35.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-qz546 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:06:35.539: INFO: stderr: ""
May 4 16:06:35.540: INFO: stdout: "true"
May 4 16:06:35.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-qz546 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 4 16:06:35.685: INFO: stderr: ""
May 4 16:06:35.685: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 4 16:06:35.685: INFO: validating pod update-demo-nautilus-qz546
May 4 16:06:35.691: INFO: got data: { "image": "nautilus.jpg" }
May 4 16:06:35.691: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 4 16:06:35.691: INFO: update-demo-nautilus-qz546 is verified up and running
May 4 16:06:35.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-z5x78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 4 16:06:35.830: INFO: stderr: ""
May 4 16:06:35.830: INFO: stdout: "true"
May 4 16:06:35.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods update-demo-nautilus-z5x78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 4 16:06:35.981: INFO: stderr: ""
May 4 16:06:35.981: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 4 16:06:35.981: INFO: validating pod update-demo-nautilus-z5x78
May 4 16:06:35.984: INFO: got data: { "image": "nautilus.jpg" }
May 4 16:06:35.984: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 4 16:06:35.984: INFO: update-demo-nautilus-z5x78 is verified up and running STEP: using delete to clean up resources May 4 16:06:35.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 delete --grace-period=0 --force -f -' May 4 16:06:36.119: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 16:06:36.119: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 4 16:06:36.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get rc,svc -l name=update-demo --no-headers' May 4 16:06:36.311: INFO: stderr: "No resources found in kubectl-4451 namespace.\n" May 4 16:06:36.311: INFO: stdout: "" May 4 16:06:36.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 4 16:06:36.489: INFO: stderr: "" May 4 16:06:36.489: INFO: stdout: "update-demo-nautilus-qz546\nupdate-demo-nautilus-z5x78\n" May 4 16:06:36.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get rc,svc -l name=update-demo --no-headers' May 4 16:06:37.154: INFO: stderr: "No resources found in kubectl-4451 namespace.\n" May 4 16:06:37.154: INFO: stdout: "" May 4 16:06:37.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4451 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 4 16:06:37.323: INFO: stderr: "" May 4 16:06:37.323: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client 
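The scale test above works by repeatedly running `kubectl get pods -o template ... -l name=update-demo` every ~5 seconds and comparing the number of pod names returned against the expected replica count (the `expected=1 actual=2` STEP lines) until they match or a timeout expires. A minimal sketch of that poll-until-stable loop, where the `list_pods` callable is a hypothetical stand-in for the kubectl invocation:

```python
import time

def wait_for_replicas(list_pods, expected, interval=5.0, timeout=300.0,
                      sleep=time.sleep):
    """Poll list_pods() until it returns exactly `expected` pod names.

    list_pods: callable returning the current list of pod names
               (stands in for the `kubectl get pods -o template` call).
    Returns the final pod list, or raises TimeoutError on expiry.
    """
    deadline = time.monotonic() + timeout
    while True:
        pods = list_pods()
        if len(pods) == expected:
            return pods
        if time.monotonic() >= deadline:
            raise TimeoutError(f"expected {expected} pods, last saw {len(pods)}")
        sleep(interval)
```

The `sleep` parameter is injectable only so the loop can be exercised without real delays; the e2e framework's own helper is structured similarly but not identical.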
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:37.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4451" for this suite. • [SLOW TEST:40.879 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":20,"skipped":234,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:17.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-d6b6c23b-4661-4faf-8dc9-ba36a2b79884 STEP: Creating configMap with name cm-test-opt-upd-104f123b-dc10-4731-a972-09cd9d7184d7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d6b6c23b-4661-4faf-8dc9-ba36a2b79884 STEP: Updating configmap cm-test-opt-upd-104f123b-dc10-4731-a972-09cd9d7184d7 STEP: Creating configMap with name cm-test-opt-create-c9d10952-32ba-46c9-a5ca-e824472ab124 STEP: waiting to observe update in volume 
[AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:37.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5458" for this suite. • [SLOW TEST:20.513 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":224,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:36.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 4 16:06:36.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9760 create -f -' May 4 16:06:36.980: INFO: stderr: "" May 4 
16:06:36.980: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 4 16:06:37.986: INFO: Selector matched 1 pods for map[app:agnhost] May 4 16:06:37.986: INFO: Found 0 / 1 May 4 16:06:38.984: INFO: Selector matched 1 pods for map[app:agnhost] May 4 16:06:38.984: INFO: Found 0 / 1 May 4 16:06:39.983: INFO: Selector matched 1 pods for map[app:agnhost] May 4 16:06:39.983: INFO: Found 0 / 1 May 4 16:06:40.985: INFO: Selector matched 1 pods for map[app:agnhost] May 4 16:06:40.985: INFO: Found 0 / 1 May 4 16:06:41.983: INFO: Selector matched 1 pods for map[app:agnhost] May 4 16:06:41.983: INFO: Found 1 / 1 May 4 16:06:41.983: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 4 16:06:41.985: INFO: Selector matched 1 pods for map[app:agnhost] May 4 16:06:41.985: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 4 16:06:41.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9760 patch pod agnhost-primary-kkgfd -p {"metadata":{"annotations":{"x":"y"}}}' May 4 16:06:42.162: INFO: stderr: "" May 4 16:06:42.162: INFO: stdout: "pod/agnhost-primary-kkgfd patched\n" STEP: checking annotations May 4 16:06:42.165: INFO: Selector matched 1 pods for map[app:agnhost] May 4 16:06:42.165: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:42.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9760" for this suite. 
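The patch step above sends `{"metadata":{"annotations":{"x":"y"}}}`, and the annotation is merged into the pod's existing annotation map rather than replacing it. For a plain string map like annotations, the effect matches RFC 7386 JSON merge-patch semantics (kubectl defaults to a strategic merge patch for built-in types, but the two agree here). A sketch of that merge, as a hypothetical helper rather than the actual apiserver code:

```python
def merge_patch(obj, patch):
    """Apply a JSON merge patch (RFC 7386): dicts merge recursively,
    a None value deletes a key, any other value replaces it."""
    if not isinstance(patch, dict) or not isinstance(obj, dict):
        return patch
    out = dict(obj)
    for key, value in patch.items():
        if value is None:
            out.pop(key, None)  # null in a merge patch means "delete this key"
        else:
            out[key] = merge_patch(out.get(key, {}), value)
    return out
```

This is why the test can add `x: y` without clobbering annotations already set by the kubelet or admission controllers.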
• [SLOW TEST:5.552 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":14,"skipped":224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:04:29.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6243 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-6243 I0504 16:04:30.002498 24 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6243, replica count: 2 I0504 16:04:33.053147 24 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0504 16:04:36.053491 24 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:04:36.053: INFO: Creating new exec pod May 4 16:04:41.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 4 16:04:41.826: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 4 16:04:41.826: INFO: stdout: "" May 4 16:04:41.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.233.34.213 80' May 4 16:04:42.088: INFO: stderr: "+ nc -zv -t -w 2 10.233.34.213 80\nConnection to 10.233.34.213 80 port [tcp/http] succeeded!\n" May 4 16:04:42.088: INFO: stdout: "" May 4 16:04:42.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:04:42.646: INFO: rc: 1 May 4 16:04:42.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
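The reachability check above treats the NodePort as up once `nc -zv -t -w 2 <node-ip> <port>` exits 0, retrying about once per second; the initial "Connection refused" failures are expected while kube-proxy is still programming the node's forwarding rules. A sketch of that retry-until-connectable loop, with an injectable `probe` standing in for the `nc` invocation (the socket probe is an assumption, not the framework's code):

```python
import socket
import time

def wait_reachable(host, port, timeout=120.0, interval=1.0,
                   probe=None, sleep=time.sleep):
    """Retry a TCP connect (like `nc -zv -t -w 2 host port`) until it
    succeeds or `timeout` elapses. Returns the number of attempts made."""
    if probe is None:
        def probe(h, p):
            try:
                # A successful connect-then-close is all `nc -z` checks for.
                with socket.create_connection((h, p), timeout=2):
                    return True
            except OSError:
                return False
    deadline = time.monotonic() + timeout
    attempts = 0
    while True:
        attempts += 1
        if probe(host, port):
            return attempts
        if time.monotonic() >= deadline:
            raise TimeoutError(f"{host}:{port} never became reachable")
        sleep(interval)
```

In the log below, each failed attempt corresponds to one "Service reachability failing ... Retrying..." entry until the connection finally succeeds.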
May 4 16:04:43.646 – 16:05:13.895: INFO: (The identical probe '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' was rerun roughly once per second; every attempt returned rc: 1 with 'nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused', each logged as 'Service reachability failing with error ... command terminated with exit code 1 error: exit status 1 Retrying...'. Repetitions elided.) 
May 4 16:05:14.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:15.107: INFO: rc: 1 May 4 16:05:15.108: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:15.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:16.205: INFO: rc: 1 May 4 16:05:16.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:16.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:16.919: INFO: rc: 1 May 4 16:05:16.919: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:17.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:18.351: INFO: rc: 1 May 4 16:05:18.351: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:18.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:18.923: INFO: rc: 1 May 4 16:05:18.924: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:19.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:19.946: INFO: rc: 1 May 4 16:05:19.946: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:20.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:20.909: INFO: rc: 1 May 4 16:05:20.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:21.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:21.946: INFO: rc: 1 May 4 16:05:21.946: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:22.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:22.949: INFO: rc: 1 May 4 16:05:22.949: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:23.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:23.934: INFO: rc: 1 May 4 16:05:23.934: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:24.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:24.897: INFO: rc: 1 May 4 16:05:24.897: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:25.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:25.910: INFO: rc: 1 May 4 16:05:25.910: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:26.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:26.909: INFO: rc: 1 May 4 16:05:26.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:27.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:28.225: INFO: rc: 1 May 4 16:05:28.225: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:28.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:28.923: INFO: rc: 1 May 4 16:05:28.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:29.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:29.893: INFO: rc: 1 May 4 16:05:29.893: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:30.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:30.913: INFO: rc: 1 May 4 16:05:30.913: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:31.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:32.092: INFO: rc: 1 May 4 16:05:32.092: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:32.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:32.913: INFO: rc: 1 May 4 16:05:32.913: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:33.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:33.933: INFO: rc: 1 May 4 16:05:33.933: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:34.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:34.964: INFO: rc: 1 May 4 16:05:34.964: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:35.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:36.081: INFO: rc: 1 May 4 16:05:36.081: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:36.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:36.923: INFO: rc: 1 May 4 16:05:36.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:37.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:37.917: INFO: rc: 1 May 4 16:05:37.917: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:38.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:39.048: INFO: rc: 1 May 4 16:05:39.048: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:39.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:40.202: INFO: rc: 1 May 4 16:05:40.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:40.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:40.940: INFO: rc: 1 May 4 16:05:40.940: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:41.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:41.931: INFO: rc: 1 May 4 16:05:41.931: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:42.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:42.903: INFO: rc: 1 May 4 16:05:42.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:43.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:43.925: INFO: rc: 1 May 4 16:05:43.925: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:44.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:45.015: INFO: rc: 1 May 4 16:05:45.015: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:45.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:45.914: INFO: rc: 1 May 4 16:05:45.914: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:46.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:46.909: INFO: rc: 1 May 4 16:05:46.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:47.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:47.922: INFO: rc: 1 May 4 16:05:47.922: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:48.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:48.927: INFO: rc: 1 May 4 16:05:48.927: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:49.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:49.893: INFO: rc: 1 May 4 16:05:49.893: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:50.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:50.891: INFO: rc: 1 May 4 16:05:50.891: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:51.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:52.035: INFO: rc: 1 May 4 16:05:52.035: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:52.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:52.940: INFO: rc: 1 May 4 16:05:52.940: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:53.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:54.037: INFO: rc: 1 May 4 16:05:54.037: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:54.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:54.927: INFO: rc: 1 May 4 16:05:54.927: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:05:55.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847' May 4 16:05:55.923: INFO: rc: 1 May 4 16:05:55.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31847 nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:05:56.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847'
May 4 16:05:57.919: INFO: rc: 1
May 4 16:05:57.919: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847:
Command stdout:

stderr:
+ nc -zv -t -w 2 10.10.190.207 31847
nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... identical retries elided: the same command was re-run roughly once per second from 16:05:58.647 through 16:06:42.647, and every attempt returned rc: 1 with the same "Connection refused" error ...]
May 4 16:06:43.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847'
May 4 16:06:43.350: INFO: rc: 1
May 4 16:06:43.350: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6243 exec execpodhfm2x -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31847:
Command stdout:

stderr:
+ nc -zv -t -w 2 10.10.190.207 31847
nc: connect to 10.10.190.207 port 31847 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 4 16:06:43.351: FAIL: Unexpected error:
    <*errors.errorString | 0xc004686ae0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31847 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31847 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1760 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002d4d800, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
May 4 16:06:43.352: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "services-6243".
STEP: Found 20 events.
May 4 16:06:43.380: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodhfm2x: { } Scheduled: Successfully assigned services-6243/execpodhfm2x to node2
May 4 16:06:43.380: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-j6fbl: { } Scheduled: Successfully assigned services-6243/externalname-service-j6fbl to node2
May 4 16:06:43.380: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-nktwq: { } Scheduled: Successfully assigned services-6243/externalname-service-nktwq to node1
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:30 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-j6fbl
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:30 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-nktwq
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:31 +0000 UTC - event for externalname-service-nktwq: {multus } AddedInterface: Add eth0 [10.244.4.48/24]
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:32 +0000 UTC - event for externalname-service-j6fbl: {multus } AddedInterface: Add eth0 [10.244.3.66/24]
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:32 +0000 UTC - event for externalname-service-nktwq: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:33 +0000 UTC - event for externalname-service-j6fbl: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 877.348412ms
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:33 +0000 UTC - event for externalname-service-j6fbl: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:33 +0000 UTC - event for externalname-service-nktwq: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 726.673334ms
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:33 +0000 UTC - event for externalname-service-nktwq: {kubelet node1} Started: Started container externalname-service
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:33 +0000 UTC - event for externalname-service-nktwq: {kubelet node1} Created: Created container externalname-service
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:34 +0000 UTC - event for externalname-service-j6fbl: {kubelet node2} Started: Started container externalname-service
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:34 +0000 UTC - event for externalname-service-j6fbl: {kubelet node2} Created: Created container externalname-service
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:38 +0000 UTC - event for execpodhfm2x: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:38 +0000 UTC - event for execpodhfm2x: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 508.876926ms
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:38 +0000 UTC - event for execpodhfm2x: {multus } AddedInterface: Add eth0 [10.244.3.69/24]
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:39 +0000 UTC - event for execpodhfm2x: {kubelet node2} Created: Created container agnhost-container
May 4 16:06:43.380: INFO: At 2021-05-04 16:04:39 +0000 UTC - event for execpodhfm2x: {kubelet node2} Started: Started container agnhost-container
May 4 16:06:43.382: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:06:43.382: INFO: execpodhfm2x node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:36 +0000 UTC }]
May 4 16:06:43.383: INFO: externalname-service-j6fbl node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }]
May 4 16:06:43.383: INFO: externalname-service-nktwq node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:30 +0000 UTC }]
May 4 16:06:43.383: INFO:
May 4 16:06:43.387: INFO: Logging node info for node master1
May 4 16:06:43.389: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 27083 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:36 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:36 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:36 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:06:36 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:06:43.389: INFO: Logging kubelet events for node master1
May 4 16:06:43.391: INFO: Logging pods the kubelet thinks is on node master1
May 4 16:06:43.418: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.418: INFO: Container coredns ready: true, restart count 1
May 4 16:06:43.418: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.418: INFO: Container nfd-controller ready: true, restart count 0
May 4 16:06:43.418: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:06:43.418: INFO: Init container install-cni ready: true, restart count 0
May 4 16:06:43.418: INFO: Container kube-flannel ready: true, restart count 3
May 4 16:06:43.418: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.418: INFO: Container kube-multus ready: true, restart count 1
May 4 16:06:43.418: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded)
May 4 16:06:43.418: INFO: Container docker-registry ready: true, restart count 0
May 4 16:06:43.418: INFO: Container nginx ready: true, restart count 0
May 4 16:06:43.418: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:06:43.418: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:06:43.418: INFO: Container node-exporter ready: true, restart count 0
May 4 16:06:43.418: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.418: INFO: Container kube-scheduler ready: true, restart count 0
May 4 16:06:43.418: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.418: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:06:43.418: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.418: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:06:43.418: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.418: INFO: Container kube-proxy ready: true, restart count 1
W0504 16:06:43.430521 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:06:43.453: INFO: Latency metrics for node master1
May 4 16:06:43.453: INFO: Logging node info for node master2
May 4 16:06:43.455: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 27066 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:06:43.456: INFO: Logging kubelet events for node master2
May 4 16:06:43.458: INFO: Logging pods the kubelet thinks is on node master2
May 4 16:06:43.471: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:06:43.471: INFO: Init container install-cni ready: true, restart count 0
May 4 16:06:43.471: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:06:43.471: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.471: INFO: Container kube-multus ready: true, restart count 1
May 4 16:06:43.471: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.471: INFO: Container autoscaler ready: true, restart count 1
May 4 16:06:43.471: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:06:43.471: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:06:43.471: INFO: Container node-exporter ready: true, restart count 0
May 4 16:06:43.471: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.471: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:06:43.471: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.471: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:06:43.471: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.471: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:06:43.471: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:06:43.471: INFO: Container kube-proxy ready: true, restart count 2
W0504 16:06:43.481592 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:06:43.510: INFO: Latency metrics for node master2
May 4 16:06:43.510: INFO: Logging node info for node master3
May 4 16:06:43.512: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 27057 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:35 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:06:43.513: INFO: Logging kubelet events for node master3 May 4 16:06:43.515: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:06:43.527: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.527: INFO: Container kube-multus ready: true, restart count 1 May 4 16:06:43.527: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.527: INFO: Container coredns ready: true, restart count 1 May 4 16:06:43.527: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:06:43.527: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:06:43.527: INFO: Container node-exporter ready: true, restart count 0 May 4 16:06:43.527: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.527: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:06:43.527: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.527: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:06:43.527: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.527: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:06:43.527: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container 
statuses recorded) May 4 16:06:43.527: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:06:43.527: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:06:43.527: INFO: Init container install-cni ready: true, restart count 0 May 4 16:06:43.527: INFO: Container kube-flannel ready: true, restart count 1 W0504 16:06:43.540099 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:06:43.562: INFO: Latency metrics for node master3 May 4 16:06:43.562: INFO: Logging node info for node node1 May 4 16:06:43.564: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 27052 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:06:35 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:06:43.565: INFO: Logging kubelet events for node node1 May 4 16:06:43.566: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:06:43.584: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:06:43.584: INFO: Container collectd ready: true, restart count 0 May 4 16:06:43.584: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:06:43.584: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:06:43.584: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:06:43.584: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:06:43.584: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:06:43.584: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container liveness-http ready: false, restart count 13 May 4 16:06:43.584: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:06:43.584: INFO: Container discover ready: false, restart count 0 May 4 16:06:43.584: INFO: Container init ready: false, restart count 0 May 4 16:06:43.584: INFO: Container install ready: false, restart count 0 May 4 16:06:43.584: INFO: pod-logs-websocket-b857447f-19c3-483a-a3e8-dd167db76ed0 started 
at 2021-05-04 16:06:06 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container main ready: true, restart count 0 May 4 16:06:43.584: INFO: ss2-2 started at 2021-05-04 16:06:37 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container webserver ready: true, restart count 0 May 4 16:06:43.584: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container kube-multus ready: true, restart count 1 May 4 16:06:43.584: INFO: externalname-service-nktwq started at 2021-05-04 16:04:30 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container externalname-service ready: true, restart count 0 May 4 16:06:43.584: INFO: dns-test-03046a35-bcfd-49ea-9f3c-cf4672294d86 started at 2021-05-04 16:06:40 +0000 UTC (0+3 container statuses recorded) May 4 16:06:43.584: INFO: Container jessie-querier ready: false, restart count 0 May 4 16:06:43.584: INFO: Container querier ready: false, restart count 0 May 4 16:06:43.584: INFO: Container webserver ready: false, restart count 0 May 4 16:06:43.584: INFO: ss2-0 started at 2021-05-04 16:06:18 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container webserver ready: true, restart count 0 May 4 16:06:43.584: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:06:43.584: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:06:43.584: INFO: Container nodereport ready: true, restart count 0 May 4 16:06:43.584: INFO: Container reconcile ready: true, restart count 0 May 4 16:06:43.584: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:06:43.584: INFO: Container custom-metrics-apiserver ready: true, restart 
count 0 May 4 16:06:43.584: INFO: Container grafana ready: true, restart count 0 May 4 16:06:43.584: INFO: Container prometheus ready: true, restart count 1 May 4 16:06:43.584: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:06:43.584: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:06:43.584: INFO: nodeport-test-lxckq started at 2021-05-04 16:06:34 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container nodeport-test ready: true, restart count 0 May 4 16:06:43.584: INFO: execpodn9wsj started at 2021-05-04 16:06:40 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container agnhost-container ready: false, restart count 0 May 4 16:06:43.584: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:06:43.584: INFO: Init container install-cni ready: true, restart count 2 May 4 16:06:43.584: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:06:43.584: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.584: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:06:43.584: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:06:43.584: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:06:43.584: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:06:43.584: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:06:43.584: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:06:43.584: INFO: Container node-exporter ready: true, restart count 0 W0504 16:06:43.595936 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:06:43.636: INFO: Latency metrics for node node1 May 4 16:06:43.636: INFO: Logging node info for node node2 May 4 16:06:43.639: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 27019 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:34 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:34 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:06:34 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:06:34 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 
gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:06:43.639: INFO: Logging kubelet events for node node2 May 4 16:06:43.641: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:06:43.668: INFO: liveness-2eec7a00-c1bb-43a0-8c2e-0a8c35203695 started at 2021-05-04 16:06:09 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container liveness ready: true, restart count 0 May 4 16:06:43.668: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:06:43.668: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:06:43.668: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:06:43.668: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:06:43.668: INFO: Container nodereport ready: true, restart count 0 May 4 16:06:43.668: INFO: Container reconcile ready: true, restart count 0 May 4 16:06:43.668: INFO: update-demo-nautilus-qz546 started at 2021-05-04 16:06:24 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container update-demo ready: false, restart count 0 May 4 16:06:43.668: INFO: externalname-service-j6fbl started at 2021-05-04 16:04:30 +0000 UTC (0+1 container 
statuses recorded) May 4 16:06:43.668: INFO: Container externalname-service ready: true, restart count 0 May 4 16:06:43.668: INFO: execpodhfm2x started at 2021-05-04 16:04:36 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:06:43.668: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:06:43.668: INFO: Init container install-cni ready: true, restart count 2 May 4 16:06:43.668: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:06:43.668: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:06:43.668: INFO: liveness-407c14ad-7a20-4e46-8321-7d673d64b89e started at 2021-05-04 16:06:36 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container liveness ready: true, restart count 0 May 4 16:06:43.668: INFO: pod-handle-http-request started at 2021-05-04 16:06:08 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container pod-handle-http-request ready: false, restart count 0 May 4 16:06:43.668: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.668: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:06:43.669: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.669: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:06:43.669: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:06:43.669: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:06:43.669: INFO: Container node-exporter ready: true, restart count 0 May 4 16:06:43.669: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 
2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:06:43.669: INFO: Container tas-controller ready: true, restart count 0 May 4 16:06:43.669: INFO: Container tas-extender ready: true, restart count 0 May 4 16:06:43.669: INFO: test-pod started at 2021-05-04 16:03:57 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.669: INFO: Container webserver ready: true, restart count 0 May 4 16:06:43.669: INFO: var-expansion-9794f91c-182f-43d1-82cd-33a014f1fbe9 started at 2021-05-04 16:06:42 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.669: INFO: Container dapi-container ready: false, restart count 0 May 4 16:06:43.669: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.669: INFO: Container liveness-exec ready: true, restart count 5 May 4 16:06:43.669: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.669: INFO: Container kube-multus ready: true, restart count 1 May 4 16:06:43.669: INFO: ss2-1 started at 2021-05-04 16:06:31 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.669: INFO: Container webserver ready: true, restart count 0 May 4 16:06:43.669: INFO: sample-webhook-deployment-cbccbf6bb-sfsl6 started at 2021-05-04 16:06:38 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.669: INFO: Container sample-webhook ready: true, restart count 0 May 4 16:06:43.669: INFO: pod-configmaps-0c6ac26d-fe8f-401e-99f6-8646ac965e19 started at 2021-05-04 16:06:17 +0000 UTC (0+3 container statuses recorded) May 4 16:06:43.669: INFO: Container createcm-volume-test ready: true, restart count 0 May 4 16:06:43.669: INFO: Container delcm-volume-test ready: true, restart count 0 May 4 16:06:43.669: INFO: Container updcm-volume-test ready: true, restart count 0 May 4 16:06:43.669: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:06:43.669: 
INFO: Container discover ready: false, restart count 0 May 4 16:06:43.669: INFO: Container init ready: false, restart count 0 May 4 16:06:43.669: INFO: Container install ready: false, restart count 0 May 4 16:06:43.669: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:06:43.669: INFO: Container collectd ready: true, restart count 0 May 4 16:06:43.669: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:06:43.669: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:06:43.669: INFO: nodeport-test-qt2pt started at 2021-05-04 16:06:34 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.669: INFO: Container nodeport-test ready: true, restart count 0 May 4 16:06:43.669: INFO: agnhost-primary-kkgfd started at 2021-05-04 16:06:36 +0000 UTC (0+1 container statuses recorded) May 4 16:06:43.669: INFO: Container agnhost-primary ready: true, restart count 0 W0504 16:06:43.680930 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:06:43.769: INFO: Latency metrics for node node2 May 4 16:06:43.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6243" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • Failure [133.811 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:06:43.351: Unexpected error: <*errors.errorString | 0xc004686ae0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31847 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31847 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1760 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":4,"skipped":53,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:43.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] 
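The failure above reports "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31847 over TCP protocol": the framework repeatedly tried to open a TCP connection to the NodePort and gave up after two minutes. A minimal sketch of that kind of reachability poll (a hypothetical helper for illustration, not the actual e2e framework code, which is written in Go):

```python
import socket
import time

def wait_for_tcp(host: str, port: int, timeout: float = 120.0,
                 interval: float = 2.0) -> bool:
    """Poll a TCP endpoint until it accepts a connection or the deadline passes.

    Mirrors the idea behind the e2e check that timed out after 2m0s above:
    keep attempting a connect, treating refused/timed-out attempts as
    "not yet reachable", and report failure only once the deadline expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection performs the full TCP handshake.
            with socket.create_connection((host, port), timeout=interval):
                return True  # endpoint accepted the connection
        except OSError:
            time.sleep(interval)  # refused or timed out; retry until deadline
    return False
```

Against the run above this would have been called with the node IP and the allocated NodePort, e.g. `wait_for_tcp("10.10.190.207", 31847)`, returning `False` after the two-minute budget.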
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:06:43.862: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c0f75b5-741a-46d8-8cc3-02bf38aa03aa" in namespace "downward-api-6806" to be "Succeeded or Failed" May 4 16:06:43.864: INFO: Pod "downwardapi-volume-9c0f75b5-741a-46d8-8cc3-02bf38aa03aa": Phase="Pending", Reason="", readiness=false. Elapsed: 1.92041ms May 4 16:06:45.867: INFO: Pod "downwardapi-volume-9c0f75b5-741a-46d8-8cc3-02bf38aa03aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00483259s May 4 16:06:47.870: INFO: Pod "downwardapi-volume-9c0f75b5-741a-46d8-8cc3-02bf38aa03aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00795595s STEP: Saw pod success May 4 16:06:47.870: INFO: Pod "downwardapi-volume-9c0f75b5-741a-46d8-8cc3-02bf38aa03aa" satisfied condition "Succeeded or Failed" May 4 16:06:47.872: INFO: Trying to get logs from node node2 pod downwardapi-volume-9c0f75b5-741a-46d8-8cc3-02bf38aa03aa container client-container: STEP: delete the pod May 4 16:06:47.884: INFO: Waiting for pod downwardapi-volume-9c0f75b5-741a-46d8-8cc3-02bf38aa03aa to disappear May 4 16:06:47.886: INFO: Pod downwardapi-volume-9c0f75b5-741a-46d8-8cc3-02bf38aa03aa no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:47.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6806" for this suite. 
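The Downward API spec above shows the framework's standard wait loop: it polls the pod phase every ~2s ("Phase=Pending … Elapsed: 2.00s") until the pod reaches "Succeeded or Failed" or the 5m0s budget runs out. The same poll-until-deadline pattern appears throughout the log ("Waiting up to 3m0s for all (but 0) nodes to be ready", "Waiting for amount of service:e2e-test-webhook endpoints to be 1"). A generic sketch of that pattern (a hypothetical helper, not the framework's Go implementation):

```python
import time

def wait_for(predicate, timeout: float = 300.0, interval: float = 2.0,
             desc: str = "condition") -> None:
    """Call predicate() every `interval` seconds until it returns truthy.

    Raises TimeoutError once `timeout` seconds elapse without success,
    analogous to the e2e framework's "Waiting up to 5m0s for pod ..."
    loops seen in the log above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return  # condition met within the budget
        time.sleep(interval)
    raise TimeoutError(f"{desc} not met within {timeout}s")
```

For the pod wait above, the predicate would fetch the pod and check `pod.status.phase in ("Succeeded", "Failed")`; the DNS and endpoint waits in the later specs follow the same shape with different predicates.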
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":77,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:37.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 16:06:38.230: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 16:06:40.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741198, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741198, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741198, 
loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741198, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:06:42.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741198, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741198, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741198, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741198, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:06:45.247: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:06:45.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7985-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is 
storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:51.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8862" for this suite. STEP: Destroying namespace "webhook-8862-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.488 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":17,"skipped":288,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:47.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:52.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4357" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":308,"failed":0} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:34.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2727.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2727.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2727.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2727.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 16:06:40.220: INFO: DNS probes using dns-test-49a612cc-c21a-40bc-b41e-53da3ea8c761 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2727.svc.cluster.local CNAME > 
/results/wheezy_udp@dns-test-service-3.dns-2727.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2727.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2727.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 16:06:46.260: INFO: DNS probes using dns-test-03046a35-bcfd-49ea-9f3c-cf4672294d86 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2727.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2727.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2727.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2727.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 16:06:52.302: INFO: DNS probes using dns-test-631f20cc-9386-42f3-bf62-1aaa1bafc3bd succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:52.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2727" for this suite. 
• [SLOW TEST:18.158 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":16,"skipped":308,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:37.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 4 16:06:37.379: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:06:59.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5761" for this suite. 
• [SLOW TEST:21.945 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":21,"skipped":243,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:51.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1052 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1052 I0504 16:06:51.486020 30 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1052, replica count: 2 I0504 16:06:54.536474 30 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0504 16:06:57.536880 30 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:06:57.536: INFO: Creating new exec pod May 4 16:07:02.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1052 exec execpodkrk9h -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 4 16:07:02.833: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 4 16:07:02.833: INFO: stdout: "" May 4 16:07:02.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1052 exec execpodkrk9h -- /bin/sh -x -c nc -zv -t -w 2 10.233.0.197 80' May 4 16:07:03.089: INFO: stderr: "+ nc -zv -t -w 2 10.233.0.197 80\nConnection to 10.233.0.197 80 port [tcp/http] succeeded!\n" May 4 16:07:03.089: INFO: stdout: "" May 4 16:07:03.089: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:03.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1052" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.671 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":18,"skipped":295,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:59.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:03.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1216" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:07:03.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:03.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1968" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":23,"skipped":268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:52.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 16:06:52.837: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 16:06:54.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741212, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741212, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741212, 
loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741212, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:06:57.854: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:06:57.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:03.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8188" for this suite. STEP: Destroying namespace "webhook-8188-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.637 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":17,"skipped":314,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:07:03.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1512 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 4 16:07:04.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7367 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' May 4 
16:07:04.173: INFO: stderr: "" May 4 16:07:04.174: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 May 4 16:07:04.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7367 delete pods e2e-test-httpd-pod' May 4 16:07:04.337: INFO: stderr: "" May 4 16:07:04.337: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:04.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7367" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":18,"skipped":321,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:07:04.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events May 4 16:07:04.410: INFO: created test-event-1 May 4 16:07:04.412: INFO: created test-event-2 May 4 16:07:04.415: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events May 4 16:07:04.418: INFO: requesting DeleteCollection of events STEP: check that the 
list of events matches the requested quantity May 4 16:07:04.430: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:04.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4179" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":19,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:07:03.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 4 16:07:10.200: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:11.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9260" for this suite. 
• [SLOW TEST:8.082 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":19,"skipped":309,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:07:03.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-817e815b-a83b-4790-a86a-ff7d4f4eb656 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:13.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5856" for this suite. 
• [SLOW TEST:10.070 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":298,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:07:13.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:20.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9801" for this suite. • [SLOW TEST:7.044 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":25,"skipped":314,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:07:20.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:07:20.704: INFO: Creating ReplicaSet my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8 May 4 16:07:20.713: INFO: Pod name my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8: Found 0 pods out of 1 May 4 16:07:25.717: INFO: Pod name my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8: Found 1 pods out of 1 May 4 16:07:25.717: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8" is running May 4 16:07:29.722: INFO: Pod "my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8-qtncs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-04 16:07:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-04 16:07:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2021-05-04 16:07:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-04 16:07:20 +0000 UTC Reason: Message:}]) May 4 16:07:29.723: INFO: Trying to dial the pod May 4 16:07:34.736: INFO: Controller my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8: Got expected result from replica 1 [my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8-qtncs]: "my-hostname-basic-7e790f56-8bf0-4603-8d98-4de135b487a8-qtncs", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:34.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2206" for this suite. • [SLOW TEST:14.062 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":26,"skipped":315,"failed":0} SSS ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":125,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client May 4 16:06:52.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 4 16:06:52.080: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-a fcaa395a-d796-43b8-9c86-cac045c29e49 27709 0 2021-05-04 16:06:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-04 16:06:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:06:52.080: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-a fcaa395a-d796-43b8-9c86-cac045c29e49 27709 0 2021-05-04 16:06:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-04 16:06:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 4 16:07:02.087: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-a fcaa395a-d796-43b8-9c86-cac045c29e49 27981 0 2021-05-04 16:06:52 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:07:02.088: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-a fcaa395a-d796-43b8-9c86-cac045c29e49 27981 0 2021-05-04 16:06:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 4 16:07:12.098: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-a fcaa395a-d796-43b8-9c86-cac045c29e49 28366 0 2021-05-04 16:06:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:07:12.098: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-a fcaa395a-d796-43b8-9c86-cac045c29e49 28366 0 2021-05-04 16:06:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 4 16:07:22.107: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-a fcaa395a-d796-43b8-9c86-cac045c29e49 28510 0 2021-05-04 16:06:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:07:22.108: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-a fcaa395a-d796-43b8-9c86-cac045c29e49 28510 0 2021-05-04 16:06:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 4 16:07:32.114: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-b c17df1d6-99eb-46a0-bd0c-3fa8bc43da48 28591 0 2021-05-04 16:07:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:07:32.114: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-b 
c17df1d6-99eb-46a0-bd0c-3fa8bc43da48 28591 0 2021-05-04 16:07:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 4 16:07:42.122: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-b c17df1d6-99eb-46a0-bd0c-3fa8bc43da48 28710 0 2021-05-04 16:07:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:07:42.122: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4394 /api/v1/namespaces/watch-4394/configmaps/e2e-watch-test-configmap-b c17df1d6-99eb-46a0-bd0c-3fa8bc43da48 28710 0 2021-05-04 16:07:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-04 16:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:07:52.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4394" for this suite. 
• [SLOW TEST:60.080 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":7,"skipped":125,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:07:34.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 4 16:07:35.025: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 4 16:07:37.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:07:39.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:07:41.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741255, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 4 16:07:44.044: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:07:56.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3492" for this suite.
STEP: Destroying namespace "webhook-3492-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:21.412 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":27,"skipped":318,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:07:52.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 4 16:07:56.274: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:07:56.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1376" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":164,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:07:04.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0504 16:07:10.550499 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:08:12.566: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:12.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2730" for this suite.
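The garbage-collector specs above hinge on `metadata.ownerReferences`: dependents carry a reference to their owner, and deleting the owner without orphaning lets the GC remove them. A minimal sketch of the reference a pod owned by a ReplicationController would carry (the helper and the `uid` value are hypothetical, for illustration only):

```python
def owner_reference(owner):
    """Build the ownerReferences entry the garbage collector matches dependents by
    (hypothetical helper; field names follow the ObjectMeta schema)."""
    return {
        "apiVersion": owner["apiVersion"],
        "kind": owner["kind"],
        "name": owner["metadata"]["name"],
        "uid": owner["metadata"]["uid"],
    }

# an RC like the one the "delete pods created by rc" spec creates (uid is made up)
rc = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {"name": "simpletest.rc", "uid": "0000-demo-uid"},
}
# pod metadata pointing back at its owner
pod_meta = {"name": "simpletest.rc-abcde", "ownerReferences": [owner_reference(rc)]}
```

When the RC is deleted with foreground or background propagation, the GC finds pods whose `ownerReferences` match the RC's `uid` and deletes them; with `orphanDependents`, it strips the reference instead.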
• [SLOW TEST:68.080 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":20,"skipped":370,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:08:12.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
May 4 16:08:12.699: INFO: Waiting up to 5m0s for pod "pod-67d586f6-95bb-4864-9e3c-21d42d2301f6" in namespace "emptydir-3965" to be "Succeeded or Failed"
May 4 16:08:12.704: INFO: Pod "pod-67d586f6-95bb-4864-9e3c-21d42d2301f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398087ms
May 4 16:08:14.707: INFO: Pod "pod-67d586f6-95bb-4864-9e3c-21d42d2301f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007454997s
May 4 16:08:16.711: INFO: Pod "pod-67d586f6-95bb-4864-9e3c-21d42d2301f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011611245s
STEP: Saw pod success
May 4 16:08:16.711: INFO: Pod "pod-67d586f6-95bb-4864-9e3c-21d42d2301f6" satisfied condition "Succeeded or Failed"
May 4 16:08:16.713: INFO: Trying to get logs from node node2 pod pod-67d586f6-95bb-4864-9e3c-21d42d2301f6 container test-container:
STEP: delete the pod
May 4 16:08:16.729: INFO: Waiting for pod pod-67d586f6-95bb-4864-9e3c-21d42d2301f6 to disappear
May 4 16:08:16.731: INFO: Pod pod-67d586f6-95bb-4864-9e3c-21d42d2301f6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:16.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3965" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":399,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:07:56.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-ss48
STEP: Creating a pod to test atomic-volume-subpath
May 4 16:07:56.336: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ss48" in namespace "subpath-6251" to be "Succeeded or Failed"
May 4 16:07:56.338: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150102ms
May 4 16:07:58.341: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005242554s
May 4 16:08:00.344: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 4.008284472s
May 4 16:08:02.347: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 6.011282325s
May 4 16:08:04.352: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 8.01605408s
May 4 16:08:06.356: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 10.020061518s
May 4 16:08:08.360: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 12.024066551s
May 4 16:08:10.365: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 14.028552211s
May 4 16:08:12.369: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 16.032915764s
May 4 16:08:14.374: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 18.038035866s
May 4 16:08:16.377: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 20.041393294s
May 4 16:08:18.382: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 22.045764468s
May 4 16:08:20.385: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Running", Reason="", readiness=true. Elapsed: 24.048646342s
May 4 16:08:22.389: INFO: Pod "pod-subpath-test-downwardapi-ss48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.052545622s
STEP: Saw pod success
May 4 16:08:22.389: INFO: Pod "pod-subpath-test-downwardapi-ss48" satisfied condition "Succeeded or Failed"
May 4 16:08:22.391: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-ss48 container test-container-subpath-downwardapi-ss48:
STEP: delete the pod
May 4 16:08:22.409: INFO: Waiting for pod pod-subpath-test-downwardapi-ss48 to disappear
May 4 16:08:22.411: INFO: Pod pod-subpath-test-downwardapi-ss48 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ss48
May 4 16:08:22.411: INFO: Deleting pod "pod-subpath-test-downwardapi-ss48" in namespace "subpath-6251"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:22.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6251" for this suite.
• [SLOW TEST:26.122 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":166,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:08:22.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:22.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5369" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":-1,"completed":10,"skipped":172,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:07:11.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0504 16:07:21.284556 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:08:23.298: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:23.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-233" for this suite.
• [SLOW TEST:72.074 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":20,"skipped":311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:08:16.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create services for rc [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
May 4 16:08:16.810: INFO: namespace kubectl-3461
May 4 16:08:16.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3461 create -f -'
May 4 16:08:17.143: INFO: stderr: ""
May 4 16:08:17.143: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
May 4 16:08:18.147: INFO: Selector matched 1 pods for map[app:agnhost]
May 4 16:08:18.147: INFO: Found 0 / 1
May 4 16:08:19.146: INFO: Selector matched 1 pods for map[app:agnhost]
May 4 16:08:19.146: INFO: Found 0 / 1
May 4 16:08:20.146: INFO: Selector matched 1 pods for map[app:agnhost]
May 4 16:08:20.147: INFO: Found 1 / 1
May 4 16:08:20.147: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 4 16:08:20.150: INFO: Selector matched 1 pods for map[app:agnhost]
May 4 16:08:20.151: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 4 16:08:20.151: INFO: wait on agnhost-primary startup in kubectl-3461
May 4 16:08:20.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3461 logs agnhost-primary-dntjd agnhost-primary'
May 4 16:08:20.318: INFO: stderr: ""
May 4 16:08:20.318: INFO: stdout: "Paused\n"
STEP: exposing RC
May 4 16:08:20.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3461 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
May 4 16:08:20.507: INFO: stderr: ""
May 4 16:08:20.508: INFO: stdout: "service/rm2 exposed\n"
May 4 16:08:20.510: INFO: Service rm2 in namespace kubectl-3461 found.
STEP: exposing service
May 4 16:08:22.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3461 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
May 4 16:08:22.689: INFO: stderr: ""
May 4 16:08:22.689: INFO: stdout: "service/rm3 exposed\n"
May 4 16:08:22.691: INFO: Service rm3 in namespace kubectl-3461 found.
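The `kubectl expose` calls above turn an RC (or another Service) into a new Service whose `--port` is the service port and `--target-port` is the backend port. A rough sketch of the Service object that mapping produces (illustrative shape only, not kubectl's actual code path):

```python
def expose(name, port, target_port, selector):
    """Approximate the Service manifest `kubectl expose --name=... --port=... --target-port=...`
    generates (sketch; real kubectl also copies labels, protocol, etc.)."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": selector,
            # traffic to service port `port` is forwarded to `target_port` on the pods
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

# the two services created in the log: both forward to the agnhost pod's 6379
rm2 = expose("rm2", 1234, 6379, {"app": "agnhost"})
rm3 = expose("rm3", 2345, 6379, {"app": "agnhost"})
```

Note that exposing `rm2` as `rm3` changes only the service port (2345 vs 1234); both ultimately target port 6379 on the selected pods.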
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:24.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3461" for this suite.
• [SLOW TEST:7.938 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1222
    should create services for rc [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":22,"skipped":414,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:08:24.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-7829302d-60bd-4ab9-8562-cc8dd4188b9f
STEP: Creating a pod to test consume secrets
May 4 16:08:24.962: INFO: Waiting up to 5m0s for pod "pod-secrets-557ee10e-b0d0-40b5-9b04-da810c447310" in namespace "secrets-9306" to be "Succeeded or Failed"
May 4 16:08:24.964: INFO: Pod "pod-secrets-557ee10e-b0d0-40b5-9b04-da810c447310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290369ms
May 4 16:08:26.967: INFO: Pod "pod-secrets-557ee10e-b0d0-40b5-9b04-da810c447310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004946084s
May 4 16:08:28.970: INFO: Pod "pod-secrets-557ee10e-b0d0-40b5-9b04-da810c447310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008452161s
STEP: Saw pod success
May 4 16:08:28.970: INFO: Pod "pod-secrets-557ee10e-b0d0-40b5-9b04-da810c447310" satisfied condition "Succeeded or Failed"
May 4 16:08:28.972: INFO: Trying to get logs from node node2 pod pod-secrets-557ee10e-b0d0-40b5-9b04-da810c447310 container secret-volume-test:
STEP: delete the pod
May 4 16:08:28.987: INFO: Waiting for pod pod-secrets-557ee10e-b0d0-40b5-9b04-da810c447310 to disappear
May 4 16:08:28.988: INFO: Pod pod-secrets-557ee10e-b0d0-40b5-9b04-da810c447310 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:28.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9306" for this suite.
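In the secret-volume spec above, the Secret's `data` values live in the API as base64 strings, while the `secret-volume-test` container reads the decoded bytes from the mounted files. The round trip is just base64 (key and value here are made up for illustration):

```python
import base64

# how a value is stored in the Secret API object's `data` map
secret_data = {"data-1": base64.b64encode(b"value-1").decode("ascii")}

# what the pod sees: one file per key, containing the decoded bytes
mounted = {key: base64.b64decode(val) for key, val in secret_data.items()}
```

The kubelet performs this decoding when projecting the secret volume, so test containers compare plain `value-1`, never the base64 form.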
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":443,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:08:29.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 4 16:08:29.047: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3243 /api/v1/namespaces/watch-3243/configmaps/e2e-watch-test-resource-version 34df1a76-b763-4585-a58c-621a2443cc85 29390 0 2021-05-04 16:08:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-05-04 16:08:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 4 16:08:29.048: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3243 /api/v1/namespaces/watch-3243/configmaps/e2e-watch-test-resource-version 34df1a76-b763-4585-a58c-621a2443cc85 29391 0 2021-05-04 16:08:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-05-04 16:08:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:29.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3243" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":24,"skipped":447,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:08:29.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May 4 16:08:29.141: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May 4 16:08:29.147: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
May 4 16:08:29.147: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May 4 16:08:29.159: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
May 4 16:08:29.160: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May 4 16:08:29.171: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May 4 16:08:29.171: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May 4 16:08:36.215: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:36.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-9102" for this suite.
• [SLOW TEST:7.123 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":25,"skipped":474,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:08:22.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 4 16:08:22.869: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 4 16:08:24.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741302, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741302, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741302, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741302, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 4 16:08:27.885: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:08:37.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8225" for this suite.
STEP: Destroying namespace "webhook-8225-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.452 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":11,"skipped":200,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:08:38.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-d47d10b8-aee6-4e83-ae78-8492624ed8f4 STEP: Creating a pod to test consume configMaps May 4 16:08:38.069: INFO: Waiting up to 5m0s for pod "pod-configmaps-2415f673-523b-4565-bcdc-27670ec70857" in namespace "configmap-4683" to be "Succeeded or Failed" May 4 16:08:38.073: INFO: Pod "pod-configmaps-2415f673-523b-4565-bcdc-27670ec70857": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.84031ms May 4 16:08:40.076: INFO: Pod "pod-configmaps-2415f673-523b-4565-bcdc-27670ec70857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006741042s May 4 16:08:42.080: INFO: Pod "pod-configmaps-2415f673-523b-4565-bcdc-27670ec70857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011143793s STEP: Saw pod success May 4 16:08:42.080: INFO: Pod "pod-configmaps-2415f673-523b-4565-bcdc-27670ec70857" satisfied condition "Succeeded or Failed" May 4 16:08:42.083: INFO: Trying to get logs from node node1 pod pod-configmaps-2415f673-523b-4565-bcdc-27670ec70857 container configmap-volume-test: STEP: delete the pod May 4 16:08:42.177: INFO: Waiting for pod pod-configmaps-2415f673-523b-4565-bcdc-27670ec70857 to disappear May 4 16:08:42.179: INFO: Pod pod-configmaps-2415f673-523b-4565-bcdc-27670ec70857 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:08:42.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4683" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":209,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:08:23.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:08:23.442: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 4 16:08:28.445: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 4 16:08:28.445: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 4 16:08:30.448: INFO: Creating deployment "test-rollover-deployment" May 4 16:08:30.454: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 4 16:08:32.459: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 4 16:08:32.465: INFO: Ensure that both replica sets have 1 created replica May 4 16:08:32.474: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 4 16:08:32.485: INFO: Updating deployment test-rollover-deployment May 4 16:08:32.485: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 4 
16:08:34.490: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 4 16:08:34.496: INFO: Make sure deployment "test-rollover-deployment" is complete May 4 16:08:34.502: INFO: all replica sets need to contain the pod-template-hash label May 4 16:08:34.502: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741312, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:08:36.507: INFO: all replica sets need to contain the pod-template-hash label May 4 16:08:36.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741315, 
loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:08:38.507: INFO: all replica sets need to contain the pod-template-hash label May 4 16:08:38.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741315, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:08:40.509: INFO: all replica sets need to contain the pod-template-hash label May 4 16:08:40.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741315, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:08:42.508: INFO: all replica sets need to contain the pod-template-hash label May 4 16:08:42.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741315, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:08:44.510: INFO: all replica sets need to contain the pod-template-hash label May 4 16:08:44.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, 
loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741315, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741310, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:08:46.509: INFO: May 4 16:08:46.509: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 May 4 16:08:46.516: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6247 /apis/apps/v1/namespaces/deployment-6247/deployments/test-rollover-deployment 28eb641d-d709-4b2a-82b5-cc42478e3349 29783 2 2021-05-04 16:08:30 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-04 16:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-04 16:08:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00565d888 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-04 16:08:30 +0000 
UTC,LastTransitionTime:2021-05-04 16:08:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2021-05-04 16:08:45 +0000 UTC,LastTransitionTime:2021-05-04 16:08:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 4 16:08:46.519: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-6247 /apis/apps/v1/namespaces/deployment-6247/replicasets/test-rollover-deployment-5797c7764 86d22ffa-79eb-4fb8-8d5e-3d81a251f0c3 29773 2 2021-05-04 16:08:32 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 28eb641d-d709-4b2a-82b5-cc42478e3349 0xc00565dd90 0xc00565dd91}] [] [{kube-controller-manager Update apps/v1 2021-05-04 16:08:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28eb641d-d709-4b2a-82b5-cc42478e3349\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00565de08 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 4 16:08:46.519: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 4 16:08:46.519: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6247 /apis/apps/v1/namespaces/deployment-6247/replicasets/test-rollover-controller e91645d9-3b85-4a57-9382-cbed58d7cbfc 29781 2 2021-05-04 16:08:23 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 28eb641d-d709-4b2a-82b5-cc42478e3349 0xc00565dc87 0xc00565dc88}] [] [{e2e.test Update apps/v1 2021-05-04 16:08:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-04 16:08:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28eb641d-d709-4b2a-82b5-cc42478e3349\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00565dd28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 16:08:46.519: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-6247 /apis/apps/v1/namespaces/deployment-6247/replicasets/test-rollover-deployment-78bc8b888c ddd02ee3-1d64-4311-9417-199bb35c0093 29498 2 2021-05-04 16:08:30 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 28eb641d-d709-4b2a-82b5-cc42478e3349 0xc00565de77 0xc00565de78}] [] [{kube-controller-manager Update apps/v1 2021-05-04 16:08:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28eb641d-d709-4b2a-82b5-cc42478e3349\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00565df08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 16:08:46.522: INFO: Pod "test-rollover-deployment-5797c7764-xldpw" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-xldpw test-rollover-deployment-5797c7764- deployment-6247 /api/v1/namespaces/deployment-6247/pods/test-rollover-deployment-5797c7764-xldpw 5ebac49b-3192-4a99-893b-3f2ca3e8f0e2 29574 0 2021-05-04 16:08:32 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.152" ], "mac": "e2:bc:06:9d:47:9a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.152" ], "mac": "e2:bc:06:9d:47:9a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 86d22ffa-79eb-4fb8-8d5e-3d81a251f0c3 0xc002be89bf 0xc002be89d0}] [] [{kube-controller-manager Update v1 2021-05-04 16:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d22ffa-79eb-4fb8-8d5e-3d81a251f0c3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:08:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:08:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.152\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swmbl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swmbl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swmbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:08:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:08:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:08:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:08:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.152,StartTime:2021-05-04 16:08:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:08:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:docker://b0755ca8d013db96dc892058ac7d07e4482255e5cb8e5e17799876ec1ef956af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:08:46.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6247" for this suite. 
• [SLOW TEST:23.117 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":21,"skipped":363,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:34.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-6457 STEP: creating replication controller nodeport-test in namespace services-6457 I0504 16:06:34.517725 27 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6457, replica count: 2 I0504 16:06:37.568828 27 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:06:40.570102 27 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:06:40.570: INFO: Creating new exec pod May 4 16:06:45.594: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 4 16:06:45.867: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" May 4 16:06:45.867: INFO: stdout: "" May 4 16:06:45.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.233.9.225 80' May 4 16:06:46.114: INFO: stderr: "+ nc -zv -t -w 2 10.233.9.225 80\nConnection to 10.233.9.225 80 port [tcp/http] succeeded!\n" May 4 16:06:46.114: INFO: stdout: "" May 4 16:06:46.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:06:46.366: INFO: rc: 1 May 4 16:06:46.366: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:06:47.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:06:48.007: INFO: rc: 1 May 4 16:06:48.007: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:06:48.366 - 16:07:38.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' approximately once per second (~50 identical attempts; repeated retry output collapsed). Every attempt returned rc: 1 with the same failure: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:07:39.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:39.636: INFO: rc: 1 May 4 16:07:39.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:40.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:40.638: INFO: rc: 1 May 4 16:07:40.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:41.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:41.621: INFO: rc: 1 May 4 16:07:41.621: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:07:42.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:42.703: INFO: rc: 1 May 4 16:07:42.703: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:43.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:43.638: INFO: rc: 1 May 4 16:07:43.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:44.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:44.626: INFO: rc: 1 May 4 16:07:44.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:07:45.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:45.622: INFO: rc: 1 May 4 16:07:45.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:46.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:46.642: INFO: rc: 1 May 4 16:07:46.642: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:47.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:47.620: INFO: rc: 1 May 4 16:07:47.620: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:07:48.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:48.601: INFO: rc: 1 May 4 16:07:48.602: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:49.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:49.658: INFO: rc: 1 May 4 16:07:49.658: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:50.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:50.588: INFO: rc: 1 May 4 16:07:50.589: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:07:51.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:51.655: INFO: rc: 1 May 4 16:07:51.655: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:52.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:52.644: INFO: rc: 1 May 4 16:07:52.644: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:53.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:53.800: INFO: rc: 1 May 4 16:07:53.800: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:07:54.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:54.723: INFO: rc: 1 May 4 16:07:54.723: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:55.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:55.631: INFO: rc: 1 May 4 16:07:55.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:56.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:56.627: INFO: rc: 1 May 4 16:07:56.627: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:07:57.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:57.607: INFO: rc: 1 May 4 16:07:57.607: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:58.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:58.630: INFO: rc: 1 May 4 16:07:58.630: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:07:59.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:07:59.611: INFO: rc: 1 May 4 16:07:59.612: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:08:00.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:00.609: INFO: rc: 1 May 4 16:08:00.609: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:01.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:01.617: INFO: rc: 1 May 4 16:08:01.617: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:02.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:02.616: INFO: rc: 1 May 4 16:08:02.616: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:08:03.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:03.621: INFO: rc: 1 May 4 16:08:03.621: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:04.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:04.633: INFO: rc: 1 May 4 16:08:04.633: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:05.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:05.619: INFO: rc: 1 May 4 16:08:05.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:08:06.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:06.884: INFO: rc: 1 May 4 16:08:06.884: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:07.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:07.625: INFO: rc: 1 May 4 16:08:07.625: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:08.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:08.637: INFO: rc: 1 May 4 16:08:08.637: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:08:09.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:09.638: INFO: rc: 1 May 4 16:08:09.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:10.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:10.615: INFO: rc: 1 May 4 16:08:10.616: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:11.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:11.800: INFO: rc: 1 May 4 16:08:11.800: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:08:12.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:12.641: INFO: rc: 1 May 4 16:08:12.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:13.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:13.628: INFO: rc: 1 May 4 16:08:13.628: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:14.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:14.632: INFO: rc: 1 May 4 16:08:14.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:08:15.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:15.624: INFO: rc: 1 May 4 16:08:15.624: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:16.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:16.632: INFO: rc: 1 May 4 16:08:16.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:17.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:17.685: INFO: rc: 1 May 4 16:08:17.685: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:08:18.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:18.669: INFO: rc: 1 May 4 16:08:18.669: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:19.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:19.642: INFO: rc: 1 May 4 16:08:19.642: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:20.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103' May 4 16:08:20.667: INFO: rc: 1 May 4 16:08:20.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6457 exec execpodn9wsj -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32103: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 32103 nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[... the same probe was retried roughly once per second from May 4 16:08:21.366 through May 4 16:08:46.900; every attempt returned rc: 1 with "nc: connect to 10.10.190.207 port 32103 (tcp) failed: Connection refused" ...]
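Every failed attempt above leaves an identical stderr line behind, so a quick post-mortem tally of probe failures can be pulled straight from a saved copy of the run. A minimal sketch, assuming the output was captured to a file (the helper name and filename are illustrative, not part of the e2e suite):

```shell
# count_refused LOGFILE - tally "Connection refused" probe failures in a
# captured e2e log. Helper name and log filename are illustrative only.
count_refused() {
    grep -c 'failed: Connection refused' "$1"
}
```

For example, `count_refused e2e.log` on a capture of this run would print the number of refused `nc` probes between the first attempt and the timeout.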
May 4 16:08:46.900: FAIL: Unexpected error:
    <*errors.errorString | 0xc0050d09a0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32103 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32103 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1242 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002947080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002947080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002947080, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "services-6457".
STEP: Found 20 events.
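The 2m0s timeout in the failure above comes from a simple poll: the framework re-runs the `nc -zv` probe about once per second until it succeeds or the deadline passes. A standalone sketch of that pattern in shell (the real implementation is Go code in test/e2e/framework; names and timings here are illustrative):

```shell
# retry_until SECONDS CMD... - re-run CMD until it succeeds or the deadline
# passes, sleeping 1s between attempts. Returns 0 on success, 1 on timeout.
# Mirrors the e2e reachability poll in spirit only.
retry_until() {
    deadline=$(( $(date +%s) + $1 )); shift
    while :; do
        "$@" && return 0
        [ "$(date +%s)" -ge "$deadline" ] && return 1
        sleep 1
    done
}
```

In this run the probed command was the `kubectl exec ... nc -zv` invocation, i.e. roughly `retry_until 120 nc -zv -w 2 10.10.190.207 32103`.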
May 4 16:08:46.916: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodn9wsj: { } Scheduled: Successfully assigned services-6457/execpodn9wsj to node1
May 4 16:08:46.916: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-lxckq: { } Scheduled: Successfully assigned services-6457/nodeport-test-lxckq to node1
May 4 16:08:46.916: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-qt2pt: { } Scheduled: Successfully assigned services-6457/nodeport-test-qt2pt to node2
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:34 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-qt2pt
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:34 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-lxckq
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:35 +0000 UTC - event for nodeport-test-qt2pt: {multus } AddedInterface: Add eth0 [10.244.3.124/24]
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:35 +0000 UTC - event for nodeport-test-qt2pt: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:36 +0000 UTC - event for nodeport-test-lxckq: {multus } AddedInterface: Add eth0 [10.244.4.95/24]
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:36 +0000 UTC - event for nodeport-test-lxckq: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:36 +0000 UTC - event for nodeport-test-qt2pt: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 480.798163ms
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:36 +0000 UTC - event for nodeport-test-qt2pt: {kubelet node2} Created: Created container nodeport-test
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:36 +0000 UTC - event for nodeport-test-qt2pt: {kubelet node2} Started: Started container nodeport-test
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:37 +0000 UTC - event for nodeport-test-lxckq: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 470.078716ms
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:37 +0000 UTC - event for nodeport-test-lxckq: {kubelet node1} Started: Started container nodeport-test
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:37 +0000 UTC - event for nodeport-test-lxckq: {kubelet node1} Created: Created container nodeport-test
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:42 +0000 UTC - event for execpodn9wsj: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:42 +0000 UTC - event for execpodn9wsj: {multus } AddedInterface: Add eth0 [10.244.4.98/24]
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:43 +0000 UTC - event for execpodn9wsj: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 664.132475ms
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:43 +0000 UTC - event for execpodn9wsj: {kubelet node1} Created: Created container agnhost-container
May 4 16:08:46.916: INFO: At 2021-05-04 16:06:43 +0000 UTC - event for execpodn9wsj: {kubelet node1} Started: Started container agnhost-container
May 4 16:08:46.919: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:08:46.919: INFO: execpodn9wsj node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:40 +0000 UTC }]
May 4 16:08:46.919: INFO: nodeport-test-lxckq node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:37 +0000 UTC }
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:34 +0000 UTC }] May 4 16:08:46.919: INFO: nodeport-test-qt2pt node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:06:34 +0000 UTC }] May 4 16:08:46.919: INFO: May 4 16:08:46.924: INFO: Logging node info for node master1 May 4 16:08:46.927: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 29805 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:08:46.928: INFO: Logging kubelet events for node master1 May 4 16:08:46.930: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:08:46.956: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:08:46.956: INFO: Container 
kube-rbac-proxy ready: true, restart count 0
May 4 16:08:46.956: INFO: Container node-exporter ready: true, restart count 0
May 4 16:08:46.956: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:08:46.956: INFO: Container kube-scheduler ready: true, restart count 0
May 4 16:08:46.956: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:08:46.956: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:08:46.956: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:08:46.956: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:08:46.956: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:08:46.956: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:08:46.956: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded)
May 4 16:08:46.956: INFO: Container docker-registry ready: true, restart count 0
May 4 16:08:46.956: INFO: Container nginx ready: true, restart count 0
May 4 16:08:46.956: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded)
May 4 16:08:46.956: INFO: Container nfd-controller ready: true, restart count 0
May 4 16:08:46.956: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:08:46.956: INFO: Init container install-cni ready: true, restart count 0
May 4 16:08:46.956: INFO: Container kube-flannel ready: true, restart count 3
May 4 16:08:46.956: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:08:46.956: INFO: Container kube-multus ready: true, restart count 1
May 4 16:08:46.956: INFO:
coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:08:46.956: INFO: Container coredns ready: true, restart count 1 W0504 16:08:46.967823 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:08:46.993: INFO: Latency metrics for node master1 May 4 16:08:46.993: INFO: Logging node info for node master2 May 4 16:08:46.995: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 29788 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:08:46.996: INFO: Logging kubelet events for node master2 May 4 16:08:46.998: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:08:47.013: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:08:47.013: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:08:47.013: INFO: Container node-exporter ready: true, restart count 0 May 4 16:08:47.013: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.013: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:08:47.013: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.013: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:08:47.013: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.013: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:08:47.013: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.013: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:08:47.013: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:08:47.013: INFO: Init container install-cni ready: true, restart count 0 May 4 16:08:47.013: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:08:47.013: INFO: 
kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.013: INFO: Container kube-multus ready: true, restart count 1 May 4 16:08:47.013: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.013: INFO: Container autoscaler ready: true, restart count 1 W0504 16:08:47.026350 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:08:47.054: INFO: Latency metrics for node master2 May 4 16:08:47.054: INFO: Logging node info for node master3 May 4 16:08:47.057: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 29782 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:45 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:45 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:45 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:08:45 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:08:47.057: INFO: Logging kubelet events for node master3 May 4 16:08:47.059: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:08:47.075: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.075: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:08:47.075: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.075: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:08:47.075: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.075: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:08:47.075: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.075: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:08:47.075: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:08:47.075: INFO: Init container install-cni ready: true, restart count 0 May 4 16:08:47.075: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:08:47.075: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.075: INFO: Container kube-multus ready: true, restart count 1 May 4 16:08:47.075: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container 
statuses recorded) May 4 16:08:47.075: INFO: Container coredns ready: true, restart count 1 May 4 16:08:47.075: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:08:47.075: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:08:47.075: INFO: Container node-exporter ready: true, restart count 0 W0504 16:08:47.087346 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:08:47.116: INFO: Latency metrics for node master3 May 4 16:08:47.116: INFO: Logging node info for node node1 May 4 16:08:47.119: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 29806 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}}
{Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}}
{kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:08:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:08:47.120: INFO: Logging kubelet events for node node1 May 4 16:08:47.123: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:08:47.140: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:08:47.140: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:08:47.140: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container liveness-http ready: true, restart count 15 May 4 16:08:47.140: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:08:47.140: INFO: Container discover ready: false, restart count 0 May 4 16:08:47.140: INFO: Container init ready: false, restart count 0 May 4 16:08:47.140: INFO: Container install ready: false, restart count 0 May 4 16:08:47.140: INFO: affinity-nodeport-timeout-ksvxx started at 2021-05-04 16:08:04 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 4 16:08:47.140: INFO: execpod-affinityz74hp started at 2021-05-04 16:08:10 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container 
agnhost-container ready: true, restart count 0 May 4 16:08:47.140: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container kube-multus ready: true, restart count 1 May 4 16:08:47.140: INFO: ss2-0 started at 2021-05-04 16:08:36 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container webserver ready: true, restart count 0 May 4 16:08:47.140: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:08:47.140: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:08:47.140: INFO: Container nodereport ready: true, restart count 0 May 4 16:08:47.140: INFO: Container reconcile ready: true, restart count 0 May 4 16:08:47.140: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:08:47.140: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:08:47.140: INFO: Container grafana ready: true, restart count 0 May 4 16:08:47.140: INFO: Container prometheus ready: true, restart count 1 May 4 16:08:47.140: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:08:47.140: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:08:47.140: INFO: nodeport-test-lxckq started at 2021-05-04 16:06:34 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container nodeport-test ready: true, restart count 0 May 4 16:08:47.140: INFO: execpodn9wsj started at 2021-05-04 16:06:40 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:08:47.140: INFO: affinity-nodeport-timeout-ncbmb started at 2021-05-04 16:08:04 +0000 UTC (0+1 container statuses 
recorded) May 4 16:08:47.140: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 4 16:08:47.140: INFO: ss2-2 started at 2021-05-04 16:08:29 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container webserver ready: true, restart count 0 May 4 16:08:47.140: INFO: netserver-0 started at 2021-05-04 16:08:46 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container webserver ready: false, restart count 0 May 4 16:08:47.140: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:08:47.140: INFO: Init container install-cni ready: true, restart count 2 May 4 16:08:47.140: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:08:47.140: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:08:47.140: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:08:47.140: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:08:47.140: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:08:47.140: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:08:47.140: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:08:47.140: INFO: Container node-exporter ready: true, restart count 0 May 4 16:08:47.140: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:08:47.140: INFO: Container collectd ready: true, restart count 0 May 4 16:08:47.140: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:08:47.140: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:08:47.140: INFO: pfpod started at 2021-05-04 16:08:31 +0000 UTC (0+1 container statuses 
recorded) May 4 16:08:47.140: INFO: Container pause ready: false, restart count 0 May 4 16:08:47.140: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:08:47.140: INFO: pod-adoption-release-9sdfn started at 2021-05-04 16:07:10 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.140: INFO: Container pod-adoption-release ready: true, restart count 0 W0504 16:08:47.154562 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:08:47.569: INFO: Latency metrics for node node1 May 4 16:08:47.569: INFO: Logging node info for node node2 May 4 16:08:47.572: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 29767 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true 
feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:45 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:45 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:08:45 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:08:45 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 
localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:08:47.573: INFO: Logging kubelet events for node node2 May 4 16:08:47.574: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:08:47.590: INFO: ss2-0 started at 2021-05-04 16:07:49 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container webserver ready: false, restart count 0 May 4 16:08:47.590: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container webserver ready: true, restart count 0 May 4 16:08:47.590: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:08:47.590: INFO: Init container install-cni ready: true, restart count 2 May 4 16:08:47.590: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:08:47.590: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:08:47.590: INFO: liveness-407c14ad-7a20-4e46-8321-7d673d64b89e started at 2021-05-04 16:06:36 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container liveness ready: false, restart count 4 May 4 16:08:47.590: INFO: ss2-2 started at 2021-05-04 16:08:47 +0000 UTC 
(0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container webserver ready: false, restart count 0 May 4 16:08:47.590: INFO: test-pod started at 2021-05-04 16:03:57 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container webserver ready: true, restart count 0 May 4 16:08:47.590: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container liveness-exec ready: false, restart count 5 May 4 16:08:47.590: INFO: affinity-nodeport-timeout-l62pm started at 2021-05-04 16:08:04 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 4 16:08:47.590: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:08:47.590: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:08:47.590: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:08:47.590: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:08:47.590: INFO: Container node-exporter ready: true, restart count 0 May 4 16:08:47.590: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:08:47.590: INFO: Container tas-controller ready: true, restart count 0 May 4 16:08:47.590: INFO: Container tas-extender ready: true, restart count 0 May 4 16:08:47.590: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.590: INFO: Container kube-multus ready: true, restart count 1 May 4 16:08:47.590: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 
14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:08:47.590: INFO: Container discover ready: false, restart count 0 May 4 16:08:47.590: INFO: Container init ready: false, restart count 0 May 4 16:08:47.590: INFO: Container install ready: false, restart count 0 May 4 16:08:47.591: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:08:47.591: INFO: Container collectd ready: true, restart count 0 May 4 16:08:47.591: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:08:47.591: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:08:47.591: INFO: nodeport-test-qt2pt started at 2021-05-04 16:06:34 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.591: INFO: Container nodeport-test ready: true, restart count 0 May 4 16:08:47.591: INFO: pod-init-1999be94-f44c-495a-9753-5adddc9d351e started at 2021-05-04 16:08:42 +0000 UTC (2+1 container statuses recorded) May 4 16:08:47.591: INFO: Init container init1 ready: false, restart count 0 May 4 16:08:47.591: INFO: Init container init2 ready: false, restart count 0 May 4 16:08:47.591: INFO: Container run1 ready: false, restart count 0 May 4 16:08:47.591: INFO: liveness-2eec7a00-c1bb-43a0-8c2e-0a8c35203695 started at 2021-05-04 16:06:09 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.591: INFO: Container liveness ready: true, restart count 0 May 4 16:08:47.591: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.591: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:08:47.591: INFO: test-rollover-deployment-5797c7764-xldpw started at 2021-05-04 16:08:32 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.591: INFO: Container agnhost ready: true, restart count 0 May 4 16:08:47.591: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.591: INFO: 
Container kubernetes-dashboard ready: true, restart count 1 May 4 16:08:47.591: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.591: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:08:47.591: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:08:47.591: INFO: Container nodereport ready: true, restart count 0 May 4 16:08:47.591: INFO: Container reconcile ready: true, restart count 0 May 4 16:08:47.591: INFO: ss2-1 started at 2021-05-04 16:08:39 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.591: INFO: Container webserver ready: true, restart count 0 May 4 16:08:47.591: INFO: netserver-1 started at 2021-05-04 16:08:46 +0000 UTC (0+1 container statuses recorded) May 4 16:08:47.591: INFO: Container webserver ready: false, restart count 0 W0504 16:08:47.604108 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:08:48.075: INFO: Latency metrics for node node2 May 4 16:08:48.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6457" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • Failure [133.603 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:08:46.900: Unexpected error: <*errors.errorString | 0xc0050d09a0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32103 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32103 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1242 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":8,"skipped":198,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:42.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:08:42.316: INFO: Deleting pod "var-expansion-9794f91c-182f-43d1-82cd-33a014f1fbe9" in namespace "var-expansion-2615" May 4 16:08:42.320: INFO: Wait up 
to 5m0s for pod "var-expansion-9794f91c-182f-43d1-82cd-33a014f1fbe9" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:08:48.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2615" for this suite. • [SLOW TEST:126.058 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":-1,"completed":15,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:08:42.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 4 16:08:42.217: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:08:57.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1246" for this suite. • [SLOW TEST:15.663 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":13,"skipped":212,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:08:48.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 4 16:08:56.665: INFO: Successfully updated pod "adopt-release-4hr25" STEP: Checking that the Job readopts the Pod May 4 16:08:56.665: INFO: Waiting up to 15m0s for pod "adopt-release-4hr25" in namespace "job-1560" to be "adopted" May 4 16:08:56.667: INFO: Pod "adopt-release-4hr25": Phase="Running", Reason="", readiness=true. 
Elapsed: 1.782185ms May 4 16:08:58.670: INFO: Pod "adopt-release-4hr25": Phase="Running", Reason="", readiness=true. Elapsed: 2.004320104s May 4 16:08:58.670: INFO: Pod "adopt-release-4hr25" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 4 16:08:59.179: INFO: Successfully updated pod "adopt-release-4hr25" STEP: Checking that the Job releases the Pod May 4 16:08:59.179: INFO: Waiting up to 15m0s for pod "adopt-release-4hr25" in namespace "job-1560" to be "released" May 4 16:08:59.181: INFO: Pod "adopt-release-4hr25": Phase="Running", Reason="", readiness=true. Elapsed: 2.58012ms May 4 16:09:01.184: INFO: Pod "adopt-release-4hr25": Phase="Running", Reason="", readiness=true. Elapsed: 2.005760755s May 4 16:09:01.184: INFO: Pod "adopt-release-4hr25" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:01.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1560" for this suite. 
• [SLOW TEST:13.069 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":9,"skipped":208,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:08:48.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:08:48.417: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 4 16:08:53.420: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 4 16:08:57.427: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 May 4 16:09:01.455: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4427 
/apis/apps/v1/namespaces/deployment-4427/deployments/test-cleanup-deployment 384c33fd-950a-4edd-b451-efa3b6f8f2ca 30186 1 2021-05-04 16:08:57 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-05-04 16:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-04 16:09:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001c88e88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-04 16:08:57 +0000 UTC,LastTransitionTime:2021-05-04 16:08:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5d446bdd47" has successfully progressed.,LastUpdateTime:2021-05-04 16:09:00 +0000 UTC,LastTransitionTime:2021-05-04 16:08:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 4 16:09:01.458: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-4427 /apis/apps/v1/namespaces/deployment-4427/replicasets/test-cleanup-deployment-5d446bdd47 fba9511d-93f9-4c82-b125-c5903b74b10d 30175 1 2021-05-04 16:08:57 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 
Deployment test-cleanup-deployment 384c33fd-950a-4edd-b451-efa3b6f8f2ca 0xc001c892d7 0xc001c892d8}] [] [{kube-controller-manager Update apps/v1 2021-05-04 16:09:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"384c33fd-950a-4edd-b451-efa3b6f8f2ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001c89368 ClusterFirst map[] false 
false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 4 16:09:01.460: INFO: Pod "test-cleanup-deployment-5d446bdd47-kspsx" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-kspsx test-cleanup-deployment-5d446bdd47- deployment-4427 /api/v1/namespaces/deployment-4427/pods/test-cleanup-deployment-5d446bdd47-kspsx 4631b15f-d955-4c35-9b3b-5b64d1054f44 30174 0 2021-05-04 16:08:57 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.123" ], "mac": "16:f5:10:22:16:10", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.123" ], "mac": "16:f5:10:22:16:10", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 fba9511d-93f9-4c82-b125-c5903b74b10d 0xc001c8976f 0xc001c89780}] [] [{kube-controller-manager Update v1 2021-05-04 16:08:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fba9511d-93f9-4c82-b125-c5903b74b10d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:08:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:09:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.123\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2bkfk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2bkfk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,
},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2bkfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*tr
ue,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:08:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:09:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:09:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:08:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.123,StartTime:2021-05-04 16:08:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:09:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:docker://a7f5631510a743cdce9465f7f732d5c2e49aa0525989b95b9aa203386de54c51,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:01.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4427" for this suite. 
• [SLOW TEST:13.079 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":16,"skipped":294,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:01.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-c920651b-bfed-4afd-8adb-22216248bb5b STEP: Creating secret with name secret-projected-all-test-volume-79b86f66-239e-46ac-b467-9a6500e9edbe STEP: Creating a pod to test Check all projections for projected volume plugin May 4 16:09:01.270: INFO: Waiting up to 5m0s for pod "projected-volume-2441a7be-aa4f-4e03-b0c7-88c0c9e56f32" in namespace "projected-5014" to be "Succeeded or Failed" May 4 16:09:01.273: INFO: Pod "projected-volume-2441a7be-aa4f-4e03-b0c7-88c0c9e56f32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.414738ms May 4 16:09:03.276: INFO: Pod "projected-volume-2441a7be-aa4f-4e03-b0c7-88c0c9e56f32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005793157s May 4 16:09:05.280: INFO: Pod "projected-volume-2441a7be-aa4f-4e03-b0c7-88c0c9e56f32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009364313s STEP: Saw pod success May 4 16:09:05.280: INFO: Pod "projected-volume-2441a7be-aa4f-4e03-b0c7-88c0c9e56f32" satisfied condition "Succeeded or Failed" May 4 16:09:05.282: INFO: Trying to get logs from node node1 pod projected-volume-2441a7be-aa4f-4e03-b0c7-88c0c9e56f32 container projected-all-volume-test: STEP: delete the pod May 4 16:09:05.295: INFO: Waiting for pod projected-volume-2441a7be-aa4f-4e03-b0c7-88c0c9e56f32 to disappear May 4 16:09:05.297: INFO: Pod projected-volume-2441a7be-aa4f-4e03-b0c7-88c0c9e56f32 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:05.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5014" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":223,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:36.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-407c14ad-7a20-4e46-8321-7d673d64b89e in namespace container-probe-5504 May 4 16:06:42.811: INFO: Started pod liveness-407c14ad-7a20-4e46-8321-7d673d64b89e in namespace container-probe-5504 STEP: checking the pod's current state and verifying that restartCount is present May 4 16:06:42.813: INFO: Initial restart count of pod liveness-407c14ad-7a20-4e46-8321-7d673d64b89e is 0 May 4 16:06:54.831: INFO: Restart count of pod container-probe-5504/liveness-407c14ad-7a20-4e46-8321-7d673d64b89e is now 1 (12.018215804s elapsed) May 4 16:07:16.864: INFO: Restart count of pod container-probe-5504/liveness-407c14ad-7a20-4e46-8321-7d673d64b89e is now 2 (34.051042047s elapsed) May 4 16:07:34.895: INFO: Restart count of pod container-probe-5504/liveness-407c14ad-7a20-4e46-8321-7d673d64b89e is now 3 (52.081864115s elapsed) May 4 
16:07:54.931: INFO: Restart count of pod container-probe-5504/liveness-407c14ad-7a20-4e46-8321-7d673d64b89e is now 4 (1m12.117765255s elapsed) May 4 16:09:07.055: INFO: Restart count of pod container-probe-5504/liveness-407c14ad-7a20-4e46-8321-7d673d64b89e is now 5 (2m24.241446196s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:07.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5504" for this suite. • [SLOW TEST:150.296 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:08:46.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4191 STEP: creating a selector STEP: Creating the service pods in 
kubernetes May 4 16:08:46.603: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 4 16:08:46.645: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 16:08:48.648: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 16:08:50.648: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:08:52.647: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:08:54.648: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:08:56.648: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:08:58.647: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:09:00.648: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:09:02.647: INFO: The status of Pod netserver-0 is Running (Ready = true) May 4 16:09:02.651: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 4 16:09:06.684: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.122:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4191 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 16:09:06.684: INFO: >>> kubeConfig: /root/.kube/config May 4 16:09:07.210: INFO: Found all expected endpoints: [netserver-0] May 4 16:09:07.212: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.156:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4191 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 16:09:07.212: INFO: >>> kubeConfig: /root/.kube/config May 4 16:09:07.506: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:07.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4191" for this suite. • [SLOW TEST:20.930 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:03:57.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9789 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9789 STEP: Creating statefulset with conflicting port in namespace statefulset-9789 STEP: Waiting until pod test-pod will start running in namespace 
statefulset-9789
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9789
May 4 16:09:07.939: FAIL: Pod ss-0 expected to be re-created at least once

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.12()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:809 +0x1258
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000179e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc000179e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc000179e00, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 4 16:09:07.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9789 describe po test-pod'
May 4 16:09:08.139: INFO: stderr: ""
May 4 16:09:08.139: INFO: stdout: "Name: test-pod\nNamespace: statefulset-9789\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Tue, 04 May 2021 16:03:57 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.51\"\n ],\n \"mac\": \"76:83:f7:08:b5:57\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.51\"\n ],\n \"mac\": \"76:83:f7:08:b5:57\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.3.51\nIPs:\n IP: 10.244.3.51\nContainers:\n webserver:\n Container ID: docker://cac58fd484f2696b9e2a12887a161d99e12322e8f0005abddab9cf005c66a6d2\n Image: 
docker.io/library/httpd:2.4.38-alpine\n Image ID: docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Tue, 04 May 2021 16:04:06 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-k5tnt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-k5tnt:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-k5tnt\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal AddedInterface 5m8s multus Add eth0 [10.244.3.51/24]\n Normal Pulling 5m8s kubelet Pulling image \"docker.io/library/httpd:2.4.38-alpine\"\n Normal Pulled 5m3s kubelet Successfully pulled image \"docker.io/library/httpd:2.4.38-alpine\" in 5.23444302s\n Normal Created 5m2s kubelet Created container webserver\n Normal Started 5m2s kubelet Started container webserver\n" May 4 16:09:08.139: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-9789 Priority: 0 Node: node2/10.10.190.208 Start Time: Tue, 04 May 2021 16:03:57 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.51" ], "mac": "76:83:f7:08:b5:57", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.51" ], "mac": "76:83:f7:08:b5:57", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.3.51 IPs: IP: 10.244.3.51 Containers: webserver: Container ID: 
docker://cac58fd484f2696b9e2a12887a161d99e12322e8f0005abddab9cf005c66a6d2 Image: docker.io/library/httpd:2.4.38-alpine Image ID: docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Tue, 04 May 2021 16:04:06 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-k5tnt (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-k5tnt: Type: Secret (a volume populated by a Secret) SecretName: default-token-k5tnt Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal AddedInterface 5m8s multus Add eth0 [10.244.3.51/24] Normal Pulling 5m8s kubelet Pulling image "docker.io/library/httpd:2.4.38-alpine" Normal Pulled 5m3s kubelet Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 5.23444302s Normal Created 5m2s kubelet Created container webserver Normal Started 5m2s kubelet Started container webserver May 4 16:09:08.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9789 logs test-pod --tail=100' May 4 16:09:08.312: INFO: stderr: "" May 4 16:09:08.312: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.51. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.51. 
Set the 'ServerName' directive globally to suppress this message\n[Tue May 04 16:04:06.235235 2021] [mpm_event:notice] [pid 1:tid 139727337552744] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue May 04 16:04:06.235279 2021] [core:notice] [pid 1:tid 139727337552744] AH00094: Command line: 'httpd -D FOREGROUND'\n" May 4 16:09:08.312: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.51. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.51. Set the 'ServerName' directive globally to suppress this message [Tue May 04 16:04:06.235235 2021] [mpm_event:notice] [pid 1:tid 139727337552744] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Tue May 04 16:04:06.235279 2021] [core:notice] [pid 1:tid 139727337552744] AH00094: Command line: 'httpd -D FOREGROUND' May 4 16:09:08.312: INFO: Deleting all statefulset in ns statefulset-9789 May 4 16:09:08.315: INFO: Scaling statefulset ss to 0 May 4 16:09:08.325: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:09:08.328: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "statefulset-9789". STEP: Found 8 events. May 4 16:09:08.340: INFO: At 2021-05-04 16:03:57 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] May 4 16:09:08.340: INFO: At 2021-05-04 16:03:57 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100]] May 4 16:09:08.340: INFO: At 2021-05-04 16:03:58 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: [9103-9104]] May 4 16:09:08.340: INFO: At 2021-05-04 16:04:00 +0000 UTC - event for test-pod: {multus } AddedInterface: Add eth0 [10.244.3.51/24] May 4 16:09:08.340: INFO: At 2021-05-04 16:04:00 +0000 UTC - event for test-pod: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:09:08.340: INFO: At 2021-05-04 16:04:05 +0000 UTC - event for test-pod: {kubelet node2} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 5.23444302s May 4 16:09:08.340: INFO: At 2021-05-04 16:04:06 +0000 UTC - event for test-pod: {kubelet node2} Created: Created container webserver May 4 16:09:08.340: INFO: At 2021-05-04 16:04:06 +0000 UTC - event for test-pod: {kubelet node2} Started: Started container webserver May 4 16:09:08.342: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:09:08.342: INFO: test-pod node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:04:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:03:57 +0000 UTC }] May 4 16:09:08.342: INFO: May 4 16:09:08.346: INFO: Logging node info for node master1 May 4 16:09:08.348: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 30413 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:09:08.349: INFO: Logging kubelet events for node master1 May 4 16:09:08.351: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:09:08.371: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.371: INFO: Container 
kube-apiserver ready: true, restart count 0 May 4 16:09:08.371: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.371: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:09:08.371: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.371: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:09:08.371: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:09:08.371: INFO: Container docker-registry ready: true, restart count 0 May 4 16:09:08.371: INFO: Container nginx ready: true, restart count 0 May 4 16:09:08.371: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:09:08.371: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:09:08.371: INFO: Container node-exporter ready: true, restart count 0 May 4 16:09:08.371: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.371: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:09:08.371: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:09:08.371: INFO: Init container install-cni ready: true, restart count 0 May 4 16:09:08.371: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:09:08.371: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.371: INFO: Container kube-multus ready: true, restart count 1 May 4 16:09:08.371: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.371: INFO: Container coredns ready: true, restart count 1 May 4 16:09:08.371: INFO: 
node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.371: INFO: Container nfd-controller ready: true, restart count 0 W0504 16:09:08.382957 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:09:08.413: INFO: Latency metrics for node master1 May 4 16:09:08.413: INFO: Logging node info for node master2 May 4 16:09:08.416: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 30345 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:09:08.416: INFO: Logging kubelet events for node master2
May 4 16:09:08.418: INFO: Logging pods the kubelet thinks is on node master2
May 4 16:09:08.426: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.426: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:09:08.426: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:09:08.426: INFO: Init container install-cni ready: true, restart count 0
May 4 16:09:08.426: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:09:08.426: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.426: INFO: Container kube-multus ready: true, restart count 1
May 4 16:09:08.426: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.426: INFO: Container autoscaler ready: true, restart count 1
May 4 16:09:08.426: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:09:08.426: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:09:08.426: INFO: Container node-exporter ready: true, restart count 0
May 4 16:09:08.426: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.426: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:09:08.426: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.426: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:09:08.426: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.426: INFO: Container kube-scheduler ready: true, restart count 2
W0504 16:09:08.440009 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:09:08.463: INFO: Latency metrics for node master2
May 4 16:09:08.463: INFO: Logging node info for node master3
May 4 16:09:08.465: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 30343 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:09:08.466: INFO: Logging kubelet events for node master3
May 4 16:09:08.468: INFO: Logging pods the kubelet thinks is on node master3
May 4 16:09:08.474: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.475: INFO: Container kube-multus ready: true, restart count 1
May 4 16:09:08.475: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.475: INFO: Container coredns ready: true, restart count 1
May 4 16:09:08.475: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:09:08.475: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:09:08.475: INFO: Container node-exporter ready: true, restart count 0
May 4 16:09:08.475: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.475: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:09:08.475: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.475: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:09:08.475: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.475: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:09:08.475: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:09:08.475: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:09:08.475: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:09:08.475: INFO: Init container install-cni ready: true, restart count 0
May 4 16:09:08.475: INFO: Container kube-flannel ready: true, restart count 1
W0504 16:09:08.488124 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:09:08.514: INFO: Latency metrics for node master3
May 4 16:09:08.514: INFO: Logging node info for node node1
May 4 16:09:08.517: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 30436 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:07 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:07 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:07 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:09:07 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:09:08.518: INFO: Logging kubelet events for node node1 May 4 16:09:08.521: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:09:08.543: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container kube-multus ready: true, restart count 1 May 4 16:09:08.543: INFO: ss2-0 started at 2021-05-04 16:08:36 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container webserver ready: true, restart count 0 May 4 16:09:08.543: INFO: busybox-readonly-fs0c67b5aa-7f35-4f97-92a3-e9d361907ac0 started at 2021-05-04 16:09:07 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container busybox-readonly-fs0c67b5aa-7f35-4f97-92a3-e9d361907ac0 ready: false, restart count 0 May 4 16:09:08.543: INFO: affinity-nodeport-timeout-ncbmb started at 2021-05-04 16:08:04 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 4 16:09:08.543: INFO: ss2-2 started at 2021-05-04 16:08:29 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container webserver ready: true, restart count 0 May 4 16:09:08.543: INFO: netserver-0 started at 2021-05-04 16:08:46 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container webserver ready: true, restart count 0 May 4 16:09:08.543: INFO: 
kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:09:08.543: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:09:08.543: INFO: Container nodereport ready: true, restart count 0 May 4 16:09:08.543: INFO: Container reconcile ready: true, restart count 0 May 4 16:09:08.543: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:09:08.543: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:09:08.543: INFO: Container grafana ready: true, restart count 0 May 4 16:09:08.543: INFO: Container prometheus ready: true, restart count 1 May 4 16:09:08.543: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:09:08.543: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:09:08.543: INFO: annotationupdate7a3cca28-0a99-4f19-b692-65183f455f64 started at 2021-05-04 16:09:07 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container client-container ready: false, restart count 0 May 4 16:09:08.543: INFO: test-webserver-4f7778a1-8d7e-4032-81c2-4098441cd02f started at 2021-05-04 16:09:01 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container test-webserver ready: false, restart count 0 May 4 16:09:08.543: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:09:08.543: INFO: Init container install-cni ready: true, restart count 2 May 4 16:09:08.543: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:09:08.543: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:09:08.543: 
INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:09:08.543: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:09:08.543: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:09:08.543: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:09:08.543: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:09:08.543: INFO: Container node-exporter ready: true, restart count 0 May 4 16:09:08.543: INFO: affinity-nodeport-q859k started at 2021-05-04 16:08:57 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container affinity-nodeport ready: true, restart count 0 May 4 16:09:08.543: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:09:08.543: INFO: Container collectd ready: true, restart count 0 May 4 16:09:08.543: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:09:08.543: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:09:08.543: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:09:08.543: INFO: pod-adoption-release-9sdfn started at 2021-05-04 16:07:10 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container pod-adoption-release ready: true, restart count 0 May 4 16:09:08.543: INFO: ss2-0 started at 2021-05-04 16:08:59 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.543: INFO: Container webserver ready: true, restart count 0 May 4 16:09:08.543: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.544: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:09:08.544: INFO: kube-proxy-t2mbn started at 
2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.544: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:09:08.544: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.544: INFO: Container liveness-http ready: true, restart count 15 May 4 16:09:08.544: INFO: host-test-container-pod started at 2021-05-04 16:09:02 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.544: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:09:08.544: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:09:08.544: INFO: Container discover ready: false, restart count 0 May 4 16:09:08.544: INFO: Container init ready: false, restart count 0 May 4 16:09:08.544: INFO: Container install ready: false, restart count 0 May 4 16:09:08.544: INFO: affinity-nodeport-timeout-ksvxx started at 2021-05-04 16:08:04 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.544: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 4 16:09:08.544: INFO: execpod-affinityz74hp started at 2021-05-04 16:08:10 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.544: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:09:08.544: INFO: e2e-test-httpd-pod started at 2021-05-04 16:09:05 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.544: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 W0504 16:09:08.554766 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:09:08.770: INFO: Latency metrics for node node1 May 4 16:09:08.770: INFO: Logging node info for node node2 May 4 16:09:08.773: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 30342 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:09:06 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 
gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:09:08.774: INFO: Logging kubelet events for node node2 May 4 16:09:08.776: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:09:08.792: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:09:08.792: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:09:08.792: INFO: Container node-exporter ready: true, restart count 0 May 4 16:09:08.792: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:09:08.792: INFO: Container tas-controller ready: true, restart count 0 May 4 16:09:08.792: INFO: Container tas-extender ready: true, restart count 0 May 4 16:09:08.792: INFO: test-pod started at 2021-05-04 16:03:57 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container webserver ready: true, restart count 0 May 4 16:09:08.792: INFO: affinity-nodeport-tmr9l started at 2021-05-04 16:08:58 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container affinity-nodeport ready: true, restart count 0 May 4 16:09:08.792: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container liveness-exec ready: true, restart count 6 May 4 16:09:08.792: INFO: affinity-nodeport-timeout-l62pm started at 2021-05-04 16:08:04 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 4 16:09:08.792: INFO: kube-proxy-rfjjf started at 
2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:09:08.792: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:09:08.792: INFO: adopt-release-d9sns started at 2021-05-04 16:08:59 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container c ready: true, restart count 0 May 4 16:09:08.792: INFO: test-container-pod started at 2021-05-04 16:09:02 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container webserver ready: true, restart count 0 May 4 16:09:08.792: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container kube-multus ready: true, restart count 1 May 4 16:09:08.792: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:09:08.792: INFO: Container discover ready: false, restart count 0 May 4 16:09:08.792: INFO: Container init ready: false, restart count 0 May 4 16:09:08.792: INFO: Container install ready: false, restart count 0 May 4 16:09:08.792: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:09:08.792: INFO: Container collectd ready: true, restart count 0 May 4 16:09:08.792: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:09:08.792: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:09:08.792: INFO: liveness-2eec7a00-c1bb-43a0-8c2e-0a8c35203695 started at 2021-05-04 16:06:09 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container liveness ready: true, restart count 0 May 4 16:09:08.792: INFO: adopt-release-4hr25 started at 2021-05-04 16:08:48 +0000 UTC (0+1 container statuses recorded) May 4 
16:09:08.792: INFO: Container c ready: true, restart count 0 May 4 16:09:08.792: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:09:08.792: INFO: adopt-release-8xqxv started at 2021-05-04 16:08:48 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container c ready: true, restart count 0 May 4 16:09:08.792: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:09:08.792: INFO: Container nodereport ready: true, restart count 0 May 4 16:09:08.792: INFO: Container reconcile ready: true, restart count 0 May 4 16:09:08.792: INFO: ss2-1 started at 2021-05-04 16:08:39 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container webserver ready: true, restart count 0 May 4 16:09:08.792: INFO: affinity-nodeport-vjvq8 started at 2021-05-04 16:08:57 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container affinity-nodeport ready: true, restart count 0 May 4 16:09:08.792: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:09:08.792: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:09:08.792: INFO: netserver-1 started at 2021-05-04 16:08:46 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container webserver ready: true, restart count 0 May 4 16:09:08.792: INFO: ss2-2 started at 2021-05-04 16:08:47 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container webserver ready: false, restart count 0 May 4 16:09:08.792: INFO: execpod-affinityl8j2v started at 2021-05-04 
16:09:03 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:09:08.792: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container webserver ready: true, restart count 0 May 4 16:09:08.792: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:09:08.792: INFO: Init container install-cni ready: true, restart count 2 May 4 16:09:08.792: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:09:08.792: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:09:08.792: INFO: Container cmk-webhook ready: true, restart count 0 W0504 16:09:08.806743 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:09:08.839: INFO: Latency metrics for node node2 May 4 16:09:08.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9789" for this suite. 
• Failure [310.966 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Should recreate evicted statefulset [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

    May 4 16:09:07.939: Pod ss-0 expected to be re-created at least once
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:809
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":1,"skipped":20,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:09:05.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl can dry-run update Pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 4 16:09:05.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6205 run
e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' May 4 16:09:05.670: INFO: stderr: "" May 4 16:09:05.670: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run May 4 16:09:05.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6205 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server' May 4 16:09:05.966: INFO: stderr: "" May 4 16:09:05.966: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine May 4 16:09:05.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6205 delete pods e2e-test-httpd-pod' May 4 16:09:09.794: INFO: stderr: "" May 4 16:09:09.794: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:09.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6205" for this suite. 
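For readability, the server-side dry-run patch body recorded in the log above can be inspected as plain JSON. This is just a sketch that re-uses the exact payload from the logged kubectl invocation; it only demonstrates the shape of the patch, not the API call itself.

```python
import json

# Exact patch payload from the logged `kubectl patch ... --dry-run=server` call.
patch = ('{"spec":{"containers":[{"name": "e2e-test-httpd-pod",'
         '"image": "docker.io/library/busybox:1.29"}]}}')

parsed = json.loads(patch)
container = parsed["spec"]["containers"][0]
print(container["name"])   # e2e-test-httpd-pod
print(container["image"])  # docker.io/library/busybox:1.29
```

Because `--dry-run=server` asks the API server to admit and validate the change without persisting it, the test can report "patched" while the pod keeps its original `httpd:2.4.38-alpine` image, which is exactly what the verification step above checks.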
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":11,"skipped":237,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:07.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:13.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-286" for this suite. 
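The "should not write to root filesystem" test above relies on the pod-level `securityContext.readOnlyRootFilesystem` field, which mounts the container's root filesystem read-only so writes outside declared volumes fail. A minimal sketch of that spec shape follows; the pod and container names are illustrative, not taken from the log.

```python
# Illustrative sketch (names are hypothetical, not from the log) of the pod
# spec shape a read-only-rootfs test uses. The key field is
# securityContext.readOnlyRootFilesystem on the container.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "readonly-busybox"},  # hypothetical name
    "spec": {
        "containers": [
            {
                "name": "busybox",
                "image": "docker.io/library/busybox:1.29",
                "command": ["sh", "-c", "sleep 3600"],
                # Writes to / inside the container will fail with EROFS.
                "securityContext": {"readOnlyRootFilesystem": True},
            }
        ]
    },
}

assert pod_spec["spec"]["containers"][0]["securityContext"]["readOnlyRootFilesystem"]
```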
• [SLOW TEST:6.057 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":536,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:13.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:13.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2009" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":18,"skipped":557,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:09.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:09:09.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-209daeb0-361f-4548-941d-fbc31044e1e4" in namespace "projected-6674" to be "Succeeded or Failed" May 4 16:09:09.846: INFO: Pod "downwardapi-volume-209daeb0-361f-4548-941d-fbc31044e1e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.993458ms May 4 16:09:11.848: INFO: Pod "downwardapi-volume-209daeb0-361f-4548-941d-fbc31044e1e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005431899s May 4 16:09:13.851: INFO: Pod "downwardapi-volume-209daeb0-361f-4548-941d-fbc31044e1e4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008343652s STEP: Saw pod success May 4 16:09:13.851: INFO: Pod "downwardapi-volume-209daeb0-361f-4548-941d-fbc31044e1e4" satisfied condition "Succeeded or Failed" May 4 16:09:13.853: INFO: Trying to get logs from node node2 pod downwardapi-volume-209daeb0-361f-4548-941d-fbc31044e1e4 container client-container: STEP: delete the pod May 4 16:09:13.866: INFO: Waiting for pod downwardapi-volume-209daeb0-361f-4548-941d-fbc31044e1e4 to disappear May 4 16:09:13.868: INFO: Pod downwardapi-volume-209daeb0-361f-4548-941d-fbc31044e1e4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:13.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6674" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":239,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":388,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:07.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] 
should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 4 16:09:14.096: INFO: Successfully updated pod "annotationupdate7a3cca28-0a99-4f19-b692-65183f455f64" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:16.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1455" for this suite. • [SLOW TEST:8.599 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":388,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:13.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for 
the deployment to be ready May 4 16:09:13.759: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 16:09:15.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741353, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741353, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741353, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741353, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:09:18.778: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:18.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-5775" for this suite. STEP: Destroying namespace "webhook-5775-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.592 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":19,"skipped":560,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:13.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:09:13.928: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9577 I0504 16:09:13.948232 27 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9577, replica count: 1 I0504 16:09:14.998831 27 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:09:15.999239 27 runners.go:190] 
svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:09:17.000516 27 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:09:18.000797 27 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:09:18.107: INFO: Created: latency-svc-9x2g8 May 4 16:09:18.111: INFO: Got endpoints: latency-svc-9x2g8 [10.528468ms] May 4 16:09:18.117: INFO: Created: latency-svc-m84lh May 4 16:09:18.119: INFO: Got endpoints: latency-svc-m84lh [7.751941ms] May 4 16:09:18.120: INFO: Created: latency-svc-xq45m May 4 16:09:18.123: INFO: Created: latency-svc-9qmkm May 4 16:09:18.123: INFO: Got endpoints: latency-svc-xq45m [11.209694ms] May 4 16:09:18.126: INFO: Got endpoints: latency-svc-9qmkm [14.799121ms] May 4 16:09:18.127: INFO: Created: latency-svc-jhhk4 May 4 16:09:18.130: INFO: Got endpoints: latency-svc-jhhk4 [17.898334ms] May 4 16:09:18.130: INFO: Created: latency-svc-t6gsj May 4 16:09:18.132: INFO: Got endpoints: latency-svc-t6gsj [20.239857ms] May 4 16:09:18.132: INFO: Created: latency-svc-cc7lt May 4 16:09:18.135: INFO: Got endpoints: latency-svc-cc7lt [22.78256ms] May 4 16:09:18.136: INFO: Created: latency-svc-rtfw5 May 4 16:09:18.138: INFO: Got endpoints: latency-svc-rtfw5 [26.067149ms] May 4 16:09:18.139: INFO: Created: latency-svc-lv6nw May 4 16:09:18.140: INFO: Got endpoints: latency-svc-lv6nw [28.736033ms] May 4 16:09:18.141: INFO: Created: latency-svc-ggjn5 May 4 16:09:18.144: INFO: Got endpoints: latency-svc-ggjn5 [32.196033ms] May 4 16:09:18.144: INFO: Created: latency-svc-pbzc9 May 4 16:09:18.146: INFO: Got endpoints: latency-svc-pbzc9 [34.724983ms] May 4 16:09:18.148: INFO: Created: latency-svc-cnzk5 May 4 16:09:18.150: INFO: Got endpoints: latency-svc-cnzk5 [37.877221ms] May 4 
16:09:18.150: INFO: Created: latency-svc-vhltf May 4 16:09:18.152: INFO: Got endpoints: latency-svc-vhltf [40.334541ms] May 4 16:09:18.153: INFO: Created: latency-svc-k2tgr May 4 16:09:18.155: INFO: Got endpoints: latency-svc-k2tgr [43.313932ms] May 4 16:09:18.156: INFO: Created: latency-svc-qdptr May 4 16:09:18.158: INFO: Created: latency-svc-xhsr4 May 4 16:09:18.159: INFO: Got endpoints: latency-svc-qdptr [46.822619ms] May 4 16:09:18.160: INFO: Got endpoints: latency-svc-xhsr4 [48.444216ms] May 4 16:09:18.162: INFO: Created: latency-svc-q479b May 4 16:09:18.164: INFO: Got endpoints: latency-svc-q479b [44.727382ms] May 4 16:09:18.165: INFO: Created: latency-svc-92bwf May 4 16:09:18.166: INFO: Created: latency-svc-kxv2j May 4 16:09:18.167: INFO: Got endpoints: latency-svc-92bwf [43.916898ms] May 4 16:09:18.169: INFO: Got endpoints: latency-svc-kxv2j [42.337427ms] May 4 16:09:18.170: INFO: Created: latency-svc-qgmf5 May 4 16:09:18.172: INFO: Got endpoints: latency-svc-qgmf5 [42.289965ms] May 4 16:09:18.173: INFO: Created: latency-svc-ft925 May 4 16:09:18.175: INFO: Got endpoints: latency-svc-ft925 [43.281057ms] May 4 16:09:18.177: INFO: Created: latency-svc-wz9xp May 4 16:09:18.179: INFO: Got endpoints: latency-svc-wz9xp [44.347717ms] May 4 16:09:18.179: INFO: Created: latency-svc-2fljz May 4 16:09:18.181: INFO: Got endpoints: latency-svc-2fljz [43.362055ms] May 4 16:09:18.182: INFO: Created: latency-svc-ln8kf May 4 16:09:18.184: INFO: Got endpoints: latency-svc-ln8kf [43.661288ms] May 4 16:09:18.185: INFO: Created: latency-svc-ndshh May 4 16:09:18.188: INFO: Got endpoints: latency-svc-ndshh [44.034893ms] May 4 16:09:18.189: INFO: Created: latency-svc-tg4pb May 4 16:09:18.191: INFO: Got endpoints: latency-svc-tg4pb [44.190186ms] May 4 16:09:18.192: INFO: Created: latency-svc-nxksp May 4 16:09:18.194: INFO: Got endpoints: latency-svc-nxksp [43.903779ms] May 4 16:09:18.194: INFO: Created: latency-svc-dmkrr May 4 16:09:18.196: INFO: Got endpoints: latency-svc-dmkrr 
[44.130639ms] May 4 16:09:18.197: INFO: Created: latency-svc-5wktq May 4 16:09:18.199: INFO: Got endpoints: latency-svc-5wktq [44.170496ms] May 4 16:09:18.200: INFO: Created: latency-svc-l4qcw May 4 16:09:18.202: INFO: Got endpoints: latency-svc-l4qcw [43.086228ms] May 4 16:09:18.202: INFO: Created: latency-svc-7jzzh May 4 16:09:18.205: INFO: Got endpoints: latency-svc-7jzzh [44.564984ms] May 4 16:09:18.206: INFO: Created: latency-svc-l5t6k May 4 16:09:18.209: INFO: Created: latency-svc-6h2cx May 4 16:09:18.210: INFO: Got endpoints: latency-svc-l5t6k [45.820581ms] May 4 16:09:18.211: INFO: Created: latency-svc-9gpvh May 4 16:09:18.214: INFO: Created: latency-svc-9r4w6 May 4 16:09:18.217: INFO: Created: latency-svc-rbqzn May 4 16:09:18.218: INFO: Created: latency-svc-t7pt2 May 4 16:09:18.221: INFO: Created: latency-svc-lpm8t May 4 16:09:18.225: INFO: Created: latency-svc-9m662 May 4 16:09:18.226: INFO: Created: latency-svc-8b9tq May 4 16:09:18.229: INFO: Created: latency-svc-h7rd8 May 4 16:09:18.232: INFO: Created: latency-svc-l6gq8 May 4 16:09:18.234: INFO: Created: latency-svc-7m7t9 May 4 16:09:18.237: INFO: Created: latency-svc-txs5h May 4 16:09:18.239: INFO: Created: latency-svc-6jlfg May 4 16:09:18.242: INFO: Created: latency-svc-dlsmv May 4 16:09:18.244: INFO: Created: latency-svc-5z6x9 May 4 16:09:18.260: INFO: Got endpoints: latency-svc-6h2cx [93.093944ms] May 4 16:09:18.265: INFO: Created: latency-svc-b5wtt May 4 16:09:18.310: INFO: Got endpoints: latency-svc-9gpvh [140.92511ms] May 4 16:09:18.315: INFO: Created: latency-svc-x2sck May 4 16:09:18.362: INFO: Got endpoints: latency-svc-9r4w6 [190.079403ms] May 4 16:09:18.368: INFO: Created: latency-svc-v5rrm May 4 16:09:18.410: INFO: Got endpoints: latency-svc-rbqzn [234.342999ms] May 4 16:09:18.415: INFO: Created: latency-svc-4tzwc May 4 16:09:18.460: INFO: Got endpoints: latency-svc-t7pt2 [281.13817ms] May 4 16:09:18.465: INFO: Created: latency-svc-46s9v May 4 16:09:18.509: INFO: Got endpoints: 
latency-svc-lpm8t [328.157657ms] May 4 16:09:18.515: INFO: Created: latency-svc-hrq2h May 4 16:09:18.561: INFO: Got endpoints: latency-svc-9m662 [376.483533ms] May 4 16:09:18.566: INFO: Created: latency-svc-gdk5j May 4 16:09:18.610: INFO: Got endpoints: latency-svc-8b9tq [422.588087ms] May 4 16:09:18.615: INFO: Created: latency-svc-ff2fg May 4 16:09:18.660: INFO: Got endpoints: latency-svc-h7rd8 [468.869971ms] May 4 16:09:18.665: INFO: Created: latency-svc-zcn49 May 4 16:09:18.710: INFO: Got endpoints: latency-svc-l6gq8 [516.577595ms] May 4 16:09:18.716: INFO: Created: latency-svc-l6fsx May 4 16:09:18.760: INFO: Got endpoints: latency-svc-7m7t9 [563.830213ms] May 4 16:09:18.765: INFO: Created: latency-svc-gk9ll May 4 16:09:18.810: INFO: Got endpoints: latency-svc-txs5h [610.378438ms] May 4 16:09:18.815: INFO: Created: latency-svc-2nxqc May 4 16:09:18.860: INFO: Got endpoints: latency-svc-6jlfg [658.222242ms] May 4 16:09:18.866: INFO: Created: latency-svc-7ln48 May 4 16:09:18.911: INFO: Got endpoints: latency-svc-dlsmv [705.531183ms] May 4 16:09:18.916: INFO: Created: latency-svc-k4hpx May 4 16:09:18.960: INFO: Got endpoints: latency-svc-5z6x9 [749.95099ms] May 4 16:09:18.966: INFO: Created: latency-svc-pksr6 May 4 16:09:19.009: INFO: Got endpoints: latency-svc-b5wtt [749.595169ms] May 4 16:09:19.015: INFO: Created: latency-svc-7ssjc May 4 16:09:19.060: INFO: Got endpoints: latency-svc-x2sck [749.994183ms] May 4 16:09:19.065: INFO: Created: latency-svc-mpdvx May 4 16:09:19.110: INFO: Got endpoints: latency-svc-v5rrm [747.791362ms] May 4 16:09:19.115: INFO: Created: latency-svc-skmx8 May 4 16:09:19.160: INFO: Got endpoints: latency-svc-4tzwc [749.905322ms] May 4 16:09:19.165: INFO: Created: latency-svc-hql4g May 4 16:09:19.210: INFO: Got endpoints: latency-svc-46s9v [749.751986ms] May 4 16:09:19.215: INFO: Created: latency-svc-9k868 May 4 16:09:19.260: INFO: Got endpoints: latency-svc-hrq2h [750.530051ms] May 4 16:09:19.266: INFO: Created: latency-svc-z7k4c May 4 
16:09:19.310: INFO: Got endpoints: latency-svc-gdk5j [749.838602ms] May 4 16:09:19.327: INFO: Created: latency-svc-kng8l May 4 16:09:19.361: INFO: Got endpoints: latency-svc-ff2fg [750.174602ms] May 4 16:09:19.367: INFO: Created: latency-svc-zbwlh May 4 16:09:19.410: INFO: Got endpoints: latency-svc-zcn49 [750.787805ms] May 4 16:09:19.416: INFO: Created: latency-svc-787n7 May 4 16:09:19.460: INFO: Got endpoints: latency-svc-l6fsx [750.22788ms] May 4 16:09:19.467: INFO: Created: latency-svc-zxb5v May 4 16:09:19.510: INFO: Got endpoints: latency-svc-gk9ll [750.150924ms] May 4 16:09:19.516: INFO: Created: latency-svc-4r8zk May 4 16:09:19.560: INFO: Got endpoints: latency-svc-2nxqc [750.12287ms] May 4 16:09:19.566: INFO: Created: latency-svc-7c8vg May 4 16:09:19.610: INFO: Got endpoints: latency-svc-7ln48 [749.578144ms] May 4 16:09:19.616: INFO: Created: latency-svc-m6x6z May 4 16:09:19.710: INFO: Got endpoints: latency-svc-k4hpx [799.507194ms] May 4 16:09:19.715: INFO: Created: latency-svc-2pz9p May 4 16:09:19.760: INFO: Got endpoints: latency-svc-pksr6 [800.133395ms] May 4 16:09:19.765: INFO: Created: latency-svc-28jns May 4 16:09:19.810: INFO: Got endpoints: latency-svc-7ssjc [800.737785ms] May 4 16:09:19.815: INFO: Created: latency-svc-j72pb May 4 16:09:19.861: INFO: Got endpoints: latency-svc-mpdvx [800.492106ms] May 4 16:09:19.867: INFO: Created: latency-svc-d64gl May 4 16:09:19.910: INFO: Got endpoints: latency-svc-skmx8 [800.449755ms] May 4 16:09:19.916: INFO: Created: latency-svc-rrxff May 4 16:09:19.960: INFO: Got endpoints: latency-svc-hql4g [799.98249ms] May 4 16:09:19.965: INFO: Created: latency-svc-7kh9j May 4 16:09:20.010: INFO: Got endpoints: latency-svc-9k868 [799.778741ms] May 4 16:09:20.016: INFO: Created: latency-svc-gf9l4 May 4 16:09:20.060: INFO: Got endpoints: latency-svc-z7k4c [799.902103ms] May 4 16:09:20.065: INFO: Created: latency-svc-jqxt4 May 4 16:09:20.110: INFO: Got endpoints: latency-svc-kng8l [799.487671ms] May 4 16:09:20.115: INFO: 
Created: latency-svc-lddtm May 4 16:09:20.161: INFO: Got endpoints: latency-svc-zbwlh [799.937919ms] May 4 16:09:20.168: INFO: Created: latency-svc-nsqwx May 4 16:09:20.210: INFO: Got endpoints: latency-svc-787n7 [799.324905ms] May 4 16:09:20.215: INFO: Created: latency-svc-t98p4 May 4 16:09:20.261: INFO: Got endpoints: latency-svc-zxb5v [800.006844ms] May 4 16:09:20.265: INFO: Created: latency-svc-qsp8q May 4 16:09:20.311: INFO: Got endpoints: latency-svc-4r8zk [800.208022ms] May 4 16:09:20.317: INFO: Created: latency-svc-jmmvk May 4 16:09:20.360: INFO: Got endpoints: latency-svc-7c8vg [799.819395ms] May 4 16:09:20.367: INFO: Created: latency-svc-llwfb May 4 16:09:20.410: INFO: Got endpoints: latency-svc-m6x6z [800.578066ms] May 4 16:09:20.416: INFO: Created: latency-svc-sl5rd May 4 16:09:20.460: INFO: Got endpoints: latency-svc-2pz9p [749.776144ms] May 4 16:09:20.465: INFO: Created: latency-svc-dr5rv May 4 16:09:20.510: INFO: Got endpoints: latency-svc-28jns [749.591157ms] May 4 16:09:20.515: INFO: Created: latency-svc-b2dxh May 4 16:09:20.560: INFO: Got endpoints: latency-svc-j72pb [749.634227ms] May 4 16:09:20.565: INFO: Created: latency-svc-m5gcw May 4 16:09:20.610: INFO: Got endpoints: latency-svc-d64gl [749.133063ms] May 4 16:09:20.615: INFO: Created: latency-svc-cx9rs May 4 16:09:20.710: INFO: Got endpoints: latency-svc-rrxff [799.493397ms] May 4 16:09:20.715: INFO: Created: latency-svc-nmdbq May 4 16:09:20.761: INFO: Got endpoints: latency-svc-7kh9j [800.78132ms] May 4 16:09:20.767: INFO: Created: latency-svc-2k9fn May 4 16:09:20.810: INFO: Got endpoints: latency-svc-gf9l4 [800.08429ms] May 4 16:09:20.815: INFO: Created: latency-svc-8wbz7 May 4 16:09:20.860: INFO: Got endpoints: latency-svc-jqxt4 [799.961435ms] May 4 16:09:20.866: INFO: Created: latency-svc-24nbz May 4 16:09:20.910: INFO: Got endpoints: latency-svc-lddtm [799.830905ms] May 4 16:09:20.915: INFO: Created: latency-svc-l9cm6 May 4 16:09:20.960: INFO: Got endpoints: latency-svc-nsqwx 
[799.411997ms] May 4 16:09:20.965: INFO: Created: latency-svc-np2pp May 4 16:09:21.010: INFO: Got endpoints: latency-svc-t98p4 [800.152343ms] May 4 16:09:21.015: INFO: Created: latency-svc-lfrdj May 4 16:09:21.060: INFO: Got endpoints: latency-svc-qsp8q [799.575655ms] May 4 16:09:21.066: INFO: Created: latency-svc-2bhzg May 4 16:09:21.110: INFO: Got endpoints: latency-svc-jmmvk [799.25168ms] May 4 16:09:21.115: INFO: Created: latency-svc-2nlp9 May 4 16:09:21.160: INFO: Got endpoints: latency-svc-llwfb [800.23139ms] May 4 16:09:21.165: INFO: Created: latency-svc-4lgf9 May 4 16:09:21.210: INFO: Got endpoints: latency-svc-sl5rd [799.952466ms] May 4 16:09:21.215: INFO: Created: latency-svc-jvdpc May 4 16:09:21.259: INFO: Got endpoints: latency-svc-dr5rv [799.450449ms] May 4 16:09:21.265: INFO: Created: latency-svc-9q6tq May 4 16:09:21.310: INFO: Got endpoints: latency-svc-b2dxh [800.342595ms] May 4 16:09:21.315: INFO: Created: latency-svc-kdf2t May 4 16:09:21.360: INFO: Got endpoints: latency-svc-m5gcw [800.326506ms] May 4 16:09:21.366: INFO: Created: latency-svc-rnq7k May 4 16:09:21.410: INFO: Got endpoints: latency-svc-cx9rs [800.324373ms] May 4 16:09:21.416: INFO: Created: latency-svc-q76n7 May 4 16:09:21.462: INFO: Got endpoints: latency-svc-nmdbq [752.385561ms] May 4 16:09:21.469: INFO: Created: latency-svc-5pfj6 May 4 16:09:21.510: INFO: Got endpoints: latency-svc-2k9fn [749.201511ms] May 4 16:09:21.516: INFO: Created: latency-svc-8g2ph May 4 16:09:21.560: INFO: Got endpoints: latency-svc-8wbz7 [750.312205ms] May 4 16:09:21.566: INFO: Created: latency-svc-xszkw May 4 16:09:21.610: INFO: Got endpoints: latency-svc-24nbz [750.357431ms] May 4 16:09:21.616: INFO: Created: latency-svc-n52pc May 4 16:09:21.660: INFO: Got endpoints: latency-svc-l9cm6 [750.243779ms] May 4 16:09:21.665: INFO: Created: latency-svc-p8qdt May 4 16:09:21.710: INFO: Got endpoints: latency-svc-np2pp [749.557766ms] May 4 16:09:21.715: INFO: Created: latency-svc-n9jdd May 4 16:09:21.760: INFO: 
Got endpoints: latency-svc-lfrdj [749.67169ms] May 4 16:09:21.768: INFO: Created: latency-svc-6wfbf May 4 16:09:21.810: INFO: Got endpoints: latency-svc-2bhzg [749.51514ms] May 4 16:09:21.815: INFO: Created: latency-svc-n7wdv May 4 16:09:21.860: INFO: Got endpoints: latency-svc-2nlp9 [750.2218ms] May 4 16:09:21.866: INFO: Created: latency-svc-6mfzd May 4 16:09:21.910: INFO: Got endpoints: latency-svc-4lgf9 [749.40978ms] May 4 16:09:21.915: INFO: Created: latency-svc-m9qtj May 4 16:09:21.960: INFO: Got endpoints: latency-svc-jvdpc [749.429201ms] May 4 16:09:21.966: INFO: Created: latency-svc-tx9fn May 4 16:09:22.010: INFO: Got endpoints: latency-svc-9q6tq [750.604685ms] May 4 16:09:22.015: INFO: Created: latency-svc-q792t May 4 16:09:22.060: INFO: Got endpoints: latency-svc-kdf2t [750.240379ms] May 4 16:09:22.066: INFO: Created: latency-svc-44x5d May 4 16:09:22.110: INFO: Got endpoints: latency-svc-rnq7k [749.466079ms] May 4 16:09:22.115: INFO: Created: latency-svc-rdrpm May 4 16:09:22.160: INFO: Got endpoints: latency-svc-q76n7 [749.840478ms] May 4 16:09:22.165: INFO: Created: latency-svc-gsrb2 May 4 16:09:22.210: INFO: Got endpoints: latency-svc-5pfj6 [747.422335ms] May 4 16:09:22.215: INFO: Created: latency-svc-886wz May 4 16:09:22.260: INFO: Got endpoints: latency-svc-8g2ph [749.576032ms] May 4 16:09:22.266: INFO: Created: latency-svc-wq2c8 May 4 16:09:22.310: INFO: Got endpoints: latency-svc-xszkw [749.731028ms] May 4 16:09:22.316: INFO: Created: latency-svc-4m7vs May 4 16:09:22.360: INFO: Got endpoints: latency-svc-n52pc [749.222253ms] May 4 16:09:22.365: INFO: Created: latency-svc-47rzf May 4 16:09:22.410: INFO: Got endpoints: latency-svc-p8qdt [749.731774ms] May 4 16:09:22.415: INFO: Created: latency-svc-cdrrf May 4 16:09:22.459: INFO: Got endpoints: latency-svc-n9jdd [749.414698ms] May 4 16:09:22.465: INFO: Created: latency-svc-4h96h May 4 16:09:22.510: INFO: Got endpoints: latency-svc-6wfbf [750.111085ms] May 4 16:09:22.516: INFO: Created: 
latency-svc-477lw May 4 16:09:22.564: INFO: Got endpoints: latency-svc-n7wdv [754.611829ms] May 4 16:09:22.570: INFO: Created: latency-svc-tc8cn May 4 16:09:22.610: INFO: Got endpoints: latency-svc-6mfzd [749.232243ms] May 4 16:09:22.616: INFO: Created: latency-svc-wnl4v May 4 16:09:22.660: INFO: Got endpoints: latency-svc-m9qtj [750.695342ms] May 4 16:09:22.666: INFO: Created: latency-svc-zdkn4 May 4 16:09:22.710: INFO: Got endpoints: latency-svc-tx9fn [750.084527ms] May 4 16:09:22.715: INFO: Created: latency-svc-htql6 May 4 16:09:22.760: INFO: Got endpoints: latency-svc-q792t [749.631659ms] May 4 16:09:22.766: INFO: Created: latency-svc-tcctb May 4 16:09:22.810: INFO: Got endpoints: latency-svc-44x5d [749.791958ms] May 4 16:09:22.816: INFO: Created: latency-svc-hjkvk May 4 16:09:22.860: INFO: Got endpoints: latency-svc-rdrpm [750.520595ms] May 4 16:09:22.866: INFO: Created: latency-svc-4d8mm May 4 16:09:22.910: INFO: Got endpoints: latency-svc-gsrb2 [750.093002ms] May 4 16:09:22.915: INFO: Created: latency-svc-6tfxk May 4 16:09:22.960: INFO: Got endpoints: latency-svc-886wz [749.932369ms] May 4 16:09:22.965: INFO: Created: latency-svc-h4zxq May 4 16:09:23.010: INFO: Got endpoints: latency-svc-wq2c8 [750.827567ms] May 4 16:09:23.017: INFO: Created: latency-svc-t6fpg May 4 16:09:23.061: INFO: Got endpoints: latency-svc-4m7vs [750.345539ms] May 4 16:09:23.066: INFO: Created: latency-svc-74jdh May 4 16:09:23.110: INFO: Got endpoints: latency-svc-47rzf [749.856675ms] May 4 16:09:23.115: INFO: Created: latency-svc-4pxbx May 4 16:09:23.160: INFO: Got endpoints: latency-svc-cdrrf [749.75556ms] May 4 16:09:23.166: INFO: Created: latency-svc-4p8wb May 4 16:09:23.210: INFO: Got endpoints: latency-svc-4h96h [751.120857ms] May 4 16:09:23.216: INFO: Created: latency-svc-zk8cs May 4 16:09:23.260: INFO: Got endpoints: latency-svc-477lw [749.757892ms] May 4 16:09:23.266: INFO: Created: latency-svc-kn8nj May 4 16:09:23.310: INFO: Got endpoints: latency-svc-tc8cn [745.30545ms] May 
4 16:09:23.315: INFO: Created: latency-svc-8p9kx May 4 16:09:23.361: INFO: Got endpoints: latency-svc-wnl4v [750.921818ms] May 4 16:09:23.367: INFO: Created: latency-svc-t679t May 4 16:09:23.409: INFO: Got endpoints: latency-svc-zdkn4 [748.97545ms] May 4 16:09:23.414: INFO: Created: latency-svc-r4mws May 4 16:09:23.460: INFO: Got endpoints: latency-svc-htql6 [749.893362ms] May 4 16:09:23.465: INFO: Created: latency-svc-tt5gg May 4 16:09:23.510: INFO: Got endpoints: latency-svc-tcctb [749.917602ms] May 4 16:09:23.515: INFO: Created: latency-svc-8lw4n May 4 16:09:23.561: INFO: Got endpoints: latency-svc-hjkvk [751.122577ms] May 4 16:09:23.566: INFO: Created: latency-svc-255t5 May 4 16:09:23.610: INFO: Got endpoints: latency-svc-4d8mm [749.937203ms] May 4 16:09:23.616: INFO: Created: latency-svc-wg97q May 4 16:09:23.660: INFO: Got endpoints: latency-svc-6tfxk [749.800479ms] May 4 16:09:23.666: INFO: Created: latency-svc-p8lq8 May 4 16:09:23.710: INFO: Got endpoints: latency-svc-h4zxq [750.120634ms] May 4 16:09:23.715: INFO: Created: latency-svc-tf5ng May 4 16:09:23.760: INFO: Got endpoints: latency-svc-t6fpg [749.823781ms] May 4 16:09:23.766: INFO: Created: latency-svc-kt56s May 4 16:09:23.810: INFO: Got endpoints: latency-svc-74jdh [749.140302ms] May 4 16:09:23.816: INFO: Created: latency-svc-crwx6 May 4 16:09:23.860: INFO: Got endpoints: latency-svc-4pxbx [750.736708ms] May 4 16:09:23.865: INFO: Created: latency-svc-c2jk7 May 4 16:09:23.910: INFO: Got endpoints: latency-svc-4p8wb [750.128427ms] May 4 16:09:23.915: INFO: Created: latency-svc-p7vwk May 4 16:09:23.960: INFO: Got endpoints: latency-svc-zk8cs [749.290645ms] May 4 16:09:23.966: INFO: Created: latency-svc-dfg5l May 4 16:09:24.009: INFO: Got endpoints: latency-svc-kn8nj [749.686239ms] May 4 16:09:24.015: INFO: Created: latency-svc-477wv May 4 16:09:24.110: INFO: Got endpoints: latency-svc-8p9kx [800.377161ms] May 4 16:09:24.115: INFO: Created: latency-svc-6gdsd May 4 16:09:24.160: INFO: Got endpoints: 
latency-svc-t679t [799.742382ms] May 4 16:09:24.166: INFO: Created: latency-svc-gsx2l May 4 16:09:24.210: INFO: Got endpoints: latency-svc-r4mws [800.509503ms] May 4 16:09:24.215: INFO: Created: latency-svc-8svzc May 4 16:09:24.261: INFO: Got endpoints: latency-svc-tt5gg [800.640267ms] May 4 16:09:24.266: INFO: Created: latency-svc-w2g97 May 4 16:09:24.310: INFO: Got endpoints: latency-svc-8lw4n [800.19785ms] May 4 16:09:24.315: INFO: Created: latency-svc-wv7f2 May 4 16:09:24.360: INFO: Got endpoints: latency-svc-255t5 [798.523707ms] May 4 16:09:24.365: INFO: Created: latency-svc-w2qp9 May 4 16:09:24.410: INFO: Got endpoints: latency-svc-wg97q [799.96094ms] May 4 16:09:24.416: INFO: Created: latency-svc-s5pq9 May 4 16:09:24.460: INFO: Got endpoints: latency-svc-p8lq8 [800.127533ms] May 4 16:09:24.466: INFO: Created: latency-svc-j85fg May 4 16:09:24.510: INFO: Got endpoints: latency-svc-tf5ng [800.152844ms] May 4 16:09:24.515: INFO: Created: latency-svc-67sbl May 4 16:09:24.560: INFO: Got endpoints: latency-svc-kt56s [799.900217ms] May 4 16:09:24.566: INFO: Created: latency-svc-dmwdz May 4 16:09:24.610: INFO: Got endpoints: latency-svc-crwx6 [800.359576ms] May 4 16:09:24.615: INFO: Created: latency-svc-x7nh6 May 4 16:09:24.660: INFO: Got endpoints: latency-svc-c2jk7 [799.278382ms] May 4 16:09:24.665: INFO: Created: latency-svc-phcmp May 4 16:09:24.710: INFO: Got endpoints: latency-svc-p7vwk [800.133034ms] May 4 16:09:24.715: INFO: Created: latency-svc-f5h4x May 4 16:09:24.760: INFO: Got endpoints: latency-svc-dfg5l [800.434545ms] May 4 16:09:24.765: INFO: Created: latency-svc-r2cvf May 4 16:09:24.811: INFO: Got endpoints: latency-svc-477wv [801.149079ms] May 4 16:09:24.816: INFO: Created: latency-svc-26kqh May 4 16:09:24.860: INFO: Got endpoints: latency-svc-6gdsd [749.944071ms] May 4 16:09:24.865: INFO: Created: latency-svc-6ntdq May 4 16:09:24.910: INFO: Got endpoints: latency-svc-gsx2l [749.414375ms] May 4 16:09:24.915: INFO: Created: latency-svc-cjmnr May 4 
16:09:24.970: INFO: Got endpoints: latency-svc-8svzc [759.662096ms] May 4 16:09:24.975: INFO: Created: latency-svc-bvb7j May 4 16:09:25.010: INFO: Got endpoints: latency-svc-w2g97 [749.585792ms] May 4 16:09:25.016: INFO: Created: latency-svc-4nnt5 May 4 16:09:25.060: INFO: Got endpoints: latency-svc-wv7f2 [750.084408ms] May 4 16:09:25.065: INFO: Created: latency-svc-n8qst May 4 16:09:25.110: INFO: Got endpoints: latency-svc-w2qp9 [750.339183ms] May 4 16:09:25.116: INFO: Created: latency-svc-7rbzk May 4 16:09:25.160: INFO: Got endpoints: latency-svc-s5pq9 [749.748988ms] May 4 16:09:25.165: INFO: Created: latency-svc-6bwxs May 4 16:09:25.210: INFO: Got endpoints: latency-svc-j85fg [749.579338ms] May 4 16:09:25.215: INFO: Created: latency-svc-cdk5k May 4 16:09:25.260: INFO: Got endpoints: latency-svc-67sbl [749.676718ms] May 4 16:09:25.265: INFO: Created: latency-svc-22vgg May 4 16:09:25.310: INFO: Got endpoints: latency-svc-dmwdz [749.761418ms] May 4 16:09:25.315: INFO: Created: latency-svc-gwt89 May 4 16:09:25.360: INFO: Got endpoints: latency-svc-x7nh6 [750.116934ms] May 4 16:09:25.365: INFO: Created: latency-svc-6dvwf May 4 16:09:25.410: INFO: Got endpoints: latency-svc-phcmp [750.08622ms] May 4 16:09:25.415: INFO: Created: latency-svc-sqmfj May 4 16:09:25.460: INFO: Got endpoints: latency-svc-f5h4x [749.707509ms] May 4 16:09:25.468: INFO: Created: latency-svc-48brn May 4 16:09:25.510: INFO: Got endpoints: latency-svc-r2cvf [749.365304ms] May 4 16:09:25.515: INFO: Created: latency-svc-24v2x May 4 16:09:25.560: INFO: Got endpoints: latency-svc-26kqh [749.808575ms] May 4 16:09:25.568: INFO: Created: latency-svc-965wl May 4 16:09:25.610: INFO: Got endpoints: latency-svc-6ntdq [750.128135ms] May 4 16:09:25.617: INFO: Created: latency-svc-g8wcs May 4 16:09:25.660: INFO: Got endpoints: latency-svc-cjmnr [750.015222ms] May 4 16:09:25.665: INFO: Created: latency-svc-tx2rc May 4 16:09:25.710: INFO: Got endpoints: latency-svc-bvb7j [740.308601ms] May 4 16:09:25.716: INFO: 
Created: latency-svc-254wm May 4 16:09:25.760: INFO: Got endpoints: latency-svc-4nnt5 [749.961471ms] May 4 16:09:25.765: INFO: Created: latency-svc-c8lhn May 4 16:09:25.810: INFO: Got endpoints: latency-svc-n8qst [750.108861ms] May 4 16:09:25.816: INFO: Created: latency-svc-nkkzk May 4 16:09:25.860: INFO: Got endpoints: latency-svc-7rbzk [750.046513ms] May 4 16:09:25.866: INFO: Created: latency-svc-6c4lx May 4 16:09:25.910: INFO: Got endpoints: latency-svc-6bwxs [750.116961ms] May 4 16:09:25.917: INFO: Created: latency-svc-cjmvm May 4 16:09:25.961: INFO: Got endpoints: latency-svc-cdk5k [750.717921ms] May 4 16:09:25.966: INFO: Created: latency-svc-m9gqx May 4 16:09:26.010: INFO: Got endpoints: latency-svc-22vgg [750.177888ms] May 4 16:09:26.015: INFO: Created: latency-svc-5vxld May 4 16:09:26.060: INFO: Got endpoints: latency-svc-gwt89 [750.274355ms] May 4 16:09:26.067: INFO: Created: latency-svc-vjldl May 4 16:09:26.110: INFO: Got endpoints: latency-svc-6dvwf [749.45083ms] May 4 16:09:26.160: INFO: Got endpoints: latency-svc-sqmfj [750.276013ms] May 4 16:09:26.210: INFO: Got endpoints: latency-svc-48brn [750.168553ms] May 4 16:09:26.260: INFO: Got endpoints: latency-svc-24v2x [749.88014ms] May 4 16:09:26.310: INFO: Got endpoints: latency-svc-965wl [749.768635ms] May 4 16:09:26.360: INFO: Got endpoints: latency-svc-g8wcs [749.276998ms] May 4 16:09:26.410: INFO: Got endpoints: latency-svc-tx2rc [749.649888ms] May 4 16:09:26.460: INFO: Got endpoints: latency-svc-254wm [750.195578ms] May 4 16:09:26.511: INFO: Got endpoints: latency-svc-c8lhn [750.272649ms] May 4 16:09:26.560: INFO: Got endpoints: latency-svc-nkkzk [749.495571ms] May 4 16:09:26.612: INFO: Got endpoints: latency-svc-6c4lx [751.939121ms] May 4 16:09:26.660: INFO: Got endpoints: latency-svc-cjmvm [749.842598ms] May 4 16:09:26.710: INFO: Got endpoints: latency-svc-m9gqx [749.388753ms] May 4 16:09:26.761: INFO: Got endpoints: latency-svc-5vxld [750.28015ms] May 4 16:09:26.810: INFO: Got endpoints: 
latency-svc-vjldl [750.005335ms] May 4 16:09:26.811: INFO: Latencies: [7.751941ms 11.209694ms 14.799121ms 17.898334ms 20.239857ms 22.78256ms 26.067149ms 28.736033ms 32.196033ms 34.724983ms 37.877221ms 40.334541ms 42.289965ms 42.337427ms 43.086228ms 43.281057ms 43.313932ms 43.362055ms 43.661288ms 43.903779ms 43.916898ms 44.034893ms 44.130639ms 44.170496ms 44.190186ms 44.347717ms 44.564984ms 44.727382ms 45.820581ms 46.822619ms 48.444216ms 93.093944ms 140.92511ms 190.079403ms 234.342999ms 281.13817ms 328.157657ms 376.483533ms 422.588087ms 468.869971ms 516.577595ms 563.830213ms 610.378438ms 658.222242ms 705.531183ms 740.308601ms 745.30545ms 747.422335ms 747.791362ms 748.97545ms 749.133063ms 749.140302ms 749.201511ms 749.222253ms 749.232243ms 749.276998ms 749.290645ms 749.365304ms 749.388753ms 749.40978ms 749.414375ms 749.414698ms 749.429201ms 749.45083ms 749.466079ms 749.495571ms 749.51514ms 749.557766ms 749.576032ms 749.578144ms 749.579338ms 749.585792ms 749.591157ms 749.595169ms 749.631659ms 749.634227ms 749.649888ms 749.67169ms 749.676718ms 749.686239ms 749.707509ms 749.731028ms 749.731774ms 749.748988ms 749.751986ms 749.75556ms 749.757892ms 749.761418ms 749.768635ms 749.776144ms 749.791958ms 749.800479ms 749.808575ms 749.823781ms 749.838602ms 749.840478ms 749.842598ms 749.856675ms 749.88014ms 749.893362ms 749.905322ms 749.917602ms 749.932369ms 749.937203ms 749.944071ms 749.95099ms 749.961471ms 749.994183ms 750.005335ms 750.015222ms 750.046513ms 750.084408ms 750.084527ms 750.08622ms 750.093002ms 750.108861ms 750.111085ms 750.116934ms 750.116961ms 750.120634ms 750.12287ms 750.128135ms 750.128427ms 750.150924ms 750.168553ms 750.174602ms 750.177888ms 750.195578ms 750.2218ms 750.22788ms 750.240379ms 750.243779ms 750.272649ms 750.274355ms 750.276013ms 750.28015ms 750.312205ms 750.339183ms 750.345539ms 750.357431ms 750.520595ms 750.530051ms 750.604685ms 750.695342ms 750.717921ms 750.736708ms 750.787805ms 750.827567ms 750.921818ms 751.120857ms 751.122577ms 751.939121ms 
752.385561ms 754.611829ms 759.662096ms 798.523707ms 799.25168ms 799.278382ms 799.324905ms 799.411997ms 799.450449ms 799.487671ms 799.493397ms 799.507194ms 799.575655ms 799.742382ms 799.778741ms 799.819395ms 799.830905ms 799.900217ms 799.902103ms 799.937919ms 799.952466ms 799.96094ms 799.961435ms 799.98249ms 800.006844ms 800.08429ms 800.127533ms 800.133034ms 800.133395ms 800.152343ms 800.152844ms 800.19785ms 800.208022ms 800.23139ms 800.324373ms 800.326506ms 800.342595ms 800.359576ms 800.377161ms 800.434545ms 800.449755ms 800.492106ms 800.509503ms 800.578066ms 800.640267ms 800.737785ms 800.78132ms 801.149079ms] May 4 16:09:26.811: INFO: 50 %ile: 749.905322ms May 4 16:09:26.811: INFO: 90 %ile: 800.133395ms May 4 16:09:26.811: INFO: 99 %ile: 800.78132ms May 4 16:09:26.811: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:26.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9577" for this suite. 
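The latency test above reports 50/90/99 %ile values over 200 samples. As a hedged illustration (not the actual framework code, which lives in test/e2e/network/service_latency.go), a nearest-rank percentile over such samples can be sketched as:

```python
import math

def percentile(samples_ms, p):
    """Return the p-th percentile of a sample list using the nearest-rank
    method: sort, then take element at index ceil(p/100 * N) - 1."""
    ordered = sorted(samples_ms)
    idx = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[max(idx, 0)]

# A few of the endpoint latencies from the log, in milliseconds:
samples = [7.75, 749.9, 750.1, 800.1, 801.1]
print(percentile(samples, 50))  # -> 750.1
```

The exact interpolation scheme the e2e framework uses may differ; this sketch only shows the general shape of the 50/90/99 %ile computation.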
• [SLOW TEST:12.915 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":13,"skipped":254,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:26.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2896.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2896.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2896.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2896.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2896.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2896.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 16:09:34.955: INFO: DNS probes using dns-2896/dns-test-6f24c9d9-7408-4aa6-ae56-a9219e76f7b6 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:34.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2896" for this suite. 
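The probe scripts above derive a pod A record from the pod IP with an awk pipeline (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'`). The same naming convention, dots in the pod IP replaced by dashes and suffixed with `<namespace>.pod.cluster.local`, can be expressed as a small sketch (hypothetical helper, for illustration only):

```python
def pod_a_record(ip, namespace):
    """Build the cluster-DNS A record name for a pod, mirroring the awk
    pipeline in the probe script: 10.244.3.5 in namespace dns-2896
    becomes 10-244-3-5.dns-2896.pod.cluster.local."""
    return ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.244.3.5", "dns-2896"))
# -> 10-244-3-5.dns-2896.pod.cluster.local
```

The test then resolves this name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an `OK` marker file for each prober that succeeds.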
• [SLOW TEST:8.079 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":283,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:16.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-0df60e33-02d1-4f6b-ba01-ea7014d1e226 in namespace container-probe-2610 May 4 16:09:20.177: INFO: Started pod liveness-0df60e33-02d1-4f6b-ba01-ea7014d1e226 in namespace container-probe-2610 STEP: checking the pod's current state and verifying that restartCount is present May 4 16:09:20.179: INFO: Initial restart count of pod liveness-0df60e33-02d1-4f6b-ba01-ea7014d1e226 is 0 May 4 16:09:42.217: INFO: Restart count 
of pod container-probe-2610/liveness-0df60e33-02d1-4f6b-ba01-ea7014d1e226 is now 1 (22.03793638s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:42.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2610" for this suite. • [SLOW TEST:26.096 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":395,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:42.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:09:42.270: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-8f7b77b5-38a0-4bde-bf9c-1928fad5c542" in namespace "projected-693" to be "Succeeded or Failed" May 4 16:09:42.272: INFO: Pod "downwardapi-volume-8f7b77b5-38a0-4bde-bf9c-1928fad5c542": Phase="Pending", Reason="", readiness=false. Elapsed: 1.897745ms May 4 16:09:44.275: INFO: Pod "downwardapi-volume-8f7b77b5-38a0-4bde-bf9c-1928fad5c542": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004441548s May 4 16:09:46.278: INFO: Pod "downwardapi-volume-8f7b77b5-38a0-4bde-bf9c-1928fad5c542": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007802565s STEP: Saw pod success May 4 16:09:46.278: INFO: Pod "downwardapi-volume-8f7b77b5-38a0-4bde-bf9c-1928fad5c542" satisfied condition "Succeeded or Failed" May 4 16:09:46.281: INFO: Trying to get logs from node node2 pod downwardapi-volume-8f7b77b5-38a0-4bde-bf9c-1928fad5c542 container client-container: STEP: delete the pod May 4 16:09:46.298: INFO: Waiting for pod downwardapi-volume-8f7b77b5-38a0-4bde-bf9c-1928fad5c542 to disappear May 4 16:09:46.300: INFO: Pod downwardapi-volume-8f7b77b5-38a0-4bde-bf9c-1928fad5c542 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:46.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-693" for this suite. 
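The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above, with their growing `Elapsed:` values, come from a poll-until-condition loop. A minimal sketch of that pattern (an assumed illustration, not the e2e framework's `wait` package):

```python
import time

def wait_for(condition, timeout_s=300, interval_s=2.0):
    """Poll `condition` until it returns truthy or `timeout_s` elapses,
    sleeping `interval_s` between checks. Returns True on success,
    False if the deadline is hit first."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    return False
```

In the log, the condition is "pod phase is Succeeded or Failed", checked roughly every two seconds, which matches the ~2 s gaps between the `Elapsed:` lines.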
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":396,"failed":0} SSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":7,"skipped":63,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:18.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2916 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 4 16:06:18.564: INFO: Found 0 stateful pods, waiting for 3 May 4 16:06:28.567: INFO: Found 1 stateful pods, waiting for 3 May 4 16:06:38.568: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 4 16:06:38.568: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 4 16:06:38.568: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 4 16:06:48.569: INFO: 
Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 4 16:06:48.569: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 4 16:06:48.569: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 4 16:06:48.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2916 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:06:48.826: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:06:48.826: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:06:48.826: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 4 16:06:58.853: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 4 16:07:08.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2916 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 16:07:09.171: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 4 16:07:09.171: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 16:07:09.171: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 16:07:19.190: INFO: Waiting for StatefulSet statefulset-2916/ss2 to complete update May 4 16:07:19.190: INFO: Waiting for Pod statefulset-2916/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 4 16:07:19.190: INFO: Waiting for Pod statefulset-2916/ss2-1 to have revision ss2-84f9d6bf57 update 
revision ss2-65c7964b94 May 4 16:07:19.190: INFO: Waiting for Pod statefulset-2916/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 4 16:07:29.196: INFO: Waiting for StatefulSet statefulset-2916/ss2 to complete update May 4 16:07:29.196: INFO: Waiting for Pod statefulset-2916/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 4 16:07:29.196: INFO: Waiting for Pod statefulset-2916/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 4 16:07:39.194: INFO: Waiting for StatefulSet statefulset-2916/ss2 to complete update May 4 16:07:39.194: INFO: Waiting for Pod statefulset-2916/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 4 16:07:49.195: INFO: Waiting for StatefulSet statefulset-2916/ss2 to complete update May 4 16:07:49.195: INFO: Waiting for Pod statefulset-2916/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 4 16:07:59.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2916 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 16:07:59.464: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 4 16:07:59.464: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 16:07:59.464: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 16:08:09.493: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 4 16:08:19.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2916 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 16:08:19.756: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 4 16:08:19.756: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" May 4 16:08:19.756: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 16:08:29.772: INFO: Waiting for StatefulSet statefulset-2916/ss2 to complete update May 4 16:08:29.772: INFO: Waiting for Pod statefulset-2916/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 4 16:08:29.772: INFO: Waiting for Pod statefulset-2916/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 4 16:08:29.772: INFO: Waiting for Pod statefulset-2916/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 4 16:08:39.779: INFO: Waiting for StatefulSet statefulset-2916/ss2 to complete update May 4 16:08:39.779: INFO: Waiting for Pod statefulset-2916/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 4 16:08:39.779: INFO: Waiting for Pod statefulset-2916/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 4 16:08:49.777: INFO: Waiting for StatefulSet statefulset-2916/ss2 to complete update May 4 16:08:49.777: INFO: Waiting for Pod statefulset-2916/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 4 16:08:59.778: INFO: Waiting for StatefulSet statefulset-2916/ss2 to complete update May 4 16:08:59.778: INFO: Waiting for Pod statefulset-2916/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 4 16:09:09.777: INFO: Deleting all statefulset in ns statefulset-2916 May 4 16:09:09.782: INFO: Scaling statefulset ss2 to 0 May 4 16:09:49.793: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:09:49.795: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:49.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2916" for this suite. • [SLOW TEST:211.278 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":8,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:49.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:09:49.925: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-589b9669-1e96-423e-b124-cb68ab0e355e" in namespace "downward-api-8348" to be "Succeeded or Failed" May 4 16:09:49.927: INFO: Pod "downwardapi-volume-589b9669-1e96-423e-b124-cb68ab0e355e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.43586ms May 4 16:09:51.930: INFO: Pod "downwardapi-volume-589b9669-1e96-423e-b124-cb68ab0e355e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005168423s May 4 16:09:53.933: INFO: Pod "downwardapi-volume-589b9669-1e96-423e-b124-cb68ab0e355e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008730785s STEP: Saw pod success May 4 16:09:53.933: INFO: Pod "downwardapi-volume-589b9669-1e96-423e-b124-cb68ab0e355e" satisfied condition "Succeeded or Failed" May 4 16:09:53.935: INFO: Trying to get logs from node node2 pod downwardapi-volume-589b9669-1e96-423e-b124-cb68ab0e355e container client-container: STEP: delete the pod May 4 16:09:53.950: INFO: Waiting for pod downwardapi-volume-589b9669-1e96-423e-b124-cb68ab0e355e to disappear May 4 16:09:53.952: INFO: Pod downwardapi-volume-589b9669-1e96-423e-b124-cb68ab0e355e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:09:53.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8348" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:01.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:01.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2088" for this suite. 
• [SLOW TEST:60.043 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:54.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 16:09:54.601: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 16:09:56.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741394, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741394, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63755741394, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741394, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:09:58.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741394, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741394, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741394, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741394, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:10:01.619: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:01.653: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "webhook-9854" for this suite. STEP: Destroying namespace "webhook-9854-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.568 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":10,"skipped":177,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:01.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 4 16:10:01.764: INFO: Waiting up to 5m0s for pod "pod-9cfc711e-e02f-4ca4-981d-6ed38aea19a7" in namespace "emptydir-9936" to be "Succeeded or Failed" May 4 16:10:01.766: INFO: Pod "pod-9cfc711e-e02f-4ca4-981d-6ed38aea19a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216504ms May 4 16:10:03.769: INFO: Pod "pod-9cfc711e-e02f-4ca4-981d-6ed38aea19a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005221306s May 4 16:10:05.772: INFO: Pod "pod-9cfc711e-e02f-4ca4-981d-6ed38aea19a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008221016s STEP: Saw pod success May 4 16:10:05.772: INFO: Pod "pod-9cfc711e-e02f-4ca4-981d-6ed38aea19a7" satisfied condition "Succeeded or Failed" May 4 16:10:05.774: INFO: Trying to get logs from node node2 pod pod-9cfc711e-e02f-4ca4-981d-6ed38aea19a7 container test-container: STEP: delete the pod May 4 16:10:05.880: INFO: Waiting for pod pod-9cfc711e-e02f-4ca4-981d-6ed38aea19a7 to disappear May 4 16:10:05.884: INFO: Pod pod-9cfc711e-e02f-4ca4-981d-6ed38aea19a7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:05.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9936" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":194,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:05.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 
16:10:05.960: INFO: Waiting up to 5m0s for pod "downwardapi-volume-55b804a5-edd5-45c1-b60c-b3395ca31583" in namespace "projected-4701" to be "Succeeded or Failed" May 4 16:10:05.963: INFO: Pod "downwardapi-volume-55b804a5-edd5-45c1-b60c-b3395ca31583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.815293ms May 4 16:10:07.966: INFO: Pod "downwardapi-volume-55b804a5-edd5-45c1-b60c-b3395ca31583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006128943s May 4 16:10:09.969: INFO: Pod "downwardapi-volume-55b804a5-edd5-45c1-b60c-b3395ca31583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009364598s STEP: Saw pod success May 4 16:10:09.969: INFO: Pod "downwardapi-volume-55b804a5-edd5-45c1-b60c-b3395ca31583" satisfied condition "Succeeded or Failed" May 4 16:10:09.971: INFO: Trying to get logs from node node2 pod downwardapi-volume-55b804a5-edd5-45c1-b60c-b3395ca31583 container client-container: STEP: delete the pod May 4 16:10:09.984: INFO: Waiting for pod downwardapi-volume-55b804a5-edd5-45c1-b60c-b3395ca31583 to disappear May 4 16:10:09.986: INFO: Pod downwardapi-volume-55b804a5-edd5-45c1-b60c-b3395ca31583 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:09.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4701" for this suite. 
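The session-affinity specs in this log drive 16 sequential requests at the service and pass only if every response names the same backend pod. That assertion reduces to a one-liner — a hypothetical helper sketching the check, not the framework's implementation:

```python
def affinity_holds(hosts):
    """True iff every response came from the same backend pod, as in the
    log's run of 16 identical 'affinity-clusterip-timeout-tdzpr' replies."""
    return len(set(hosts)) == 1 if hosts else False

same = ["affinity-clusterip-timeout-tdzpr"] * 16
print(affinity_holds(same))                                         # True
print(affinity_holds(same + ["affinity-clusterip-timeout-x5tx2"]))  # False
```

The timeout variant then waits past the affinity window (the ~15s gap between the 16:09:45 and 16:10:00 requests in the log) and expects the opposite: a response from a different backend.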
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":207,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:18.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-169 May 4 16:09:22.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-169 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 4 16:09:23.284: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 4 16:09:23.284: INFO: stdout: "iptables" May 4 16:09:23.284: INFO: proxyMode: iptables May 4 16:09:23.288: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 4 16:09:23.290: INFO: Pod kube-proxy-mode-detector still exists May 4 16:09:25.291: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 4 16:09:25.294: INFO: Pod kube-proxy-mode-detector still exists May 4 16:09:27.291: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 4 16:09:27.294: INFO: Pod kube-proxy-mode-detector still exists May 4 16:09:29.291: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 4 
16:09:29.294: INFO: Pod kube-proxy-mode-detector still exists May 4 16:09:31.291: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 4 16:09:31.293: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-169 STEP: creating replication controller affinity-clusterip-timeout in namespace services-169 I0504 16:09:31.303942 38 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-169, replica count: 3 I0504 16:09:34.354628 38 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:09:37.354998 38 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:09:37.360: INFO: Creating new exec pod May 4 16:09:44.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-169 exec execpod-affinityg7ng6 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 4 16:09:44.638: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" May 4 16:09:44.638: INFO: stdout: "" May 4 16:09:44.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-169 exec execpod-affinityg7ng6 -- /bin/sh -x -c nc -zv -t -w 2 10.233.63.132 80' May 4 16:09:44.901: INFO: stderr: "+ nc -zv -t -w 2 10.233.63.132 80\nConnection to 10.233.63.132 80 port [tcp/http] succeeded!\n" May 4 16:09:44.901: INFO: stdout: "" May 4 16:09:44.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-169 exec execpod-affinityg7ng6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.63.132:80/ ; done' May 4 16:09:45.232: INFO: 
stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n" May 4 16:09:45.232: INFO: stdout: "\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr\naffinity-clusterip-timeout-tdzpr" May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: 
affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Received response from host: affinity-clusterip-timeout-tdzpr May 4 16:09:45.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-169 exec execpod-affinityg7ng6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.63.132:80/' May 4 16:09:45.570: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n" May 4 16:09:45.570: INFO: stdout: "affinity-clusterip-timeout-tdzpr" May 4 16:10:00.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-169 exec execpod-affinityg7ng6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.63.132:80/' May 4 16:10:00.851: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.63.132:80/\n" May 4 16:10:00.851: INFO: stdout: "affinity-clusterip-timeout-x5tx2" May 4 16:10:00.851: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-169, will 
wait for the garbage collector to delete the pods May 4 16:10:00.918: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.791666ms May 4 16:10:01.618: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 700.413582ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:10.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-169" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:51.082 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":569,"failed":0} SSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:06:09.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-2eec7a00-c1bb-43a0-8c2e-0a8c35203695 in namespace container-probe-7921 May 4 16:06:15.061: INFO: Started pod liveness-2eec7a00-c1bb-43a0-8c2e-0a8c35203695 in namespace container-probe-7921 STEP: checking the pod's current state and verifying that restartCount is present May 4 16:06:15.064: INFO: Initial restart count of pod liveness-2eec7a00-c1bb-43a0-8c2e-0a8c35203695 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:15.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7921" for this suite. • [SLOW TEST:246.488 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":112,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:15.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-27da7d02-8709-4d4c-ba95-e33da2e88016 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:15.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2537" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":7,"skipped":125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:15.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-3d6dda04-164c-42ec-8f89-1789fb992794 STEP: Creating a pod to test consume configMaps May 4 16:10:15.684: INFO: Waiting up to 5m0s for pod "pod-configmaps-41c7593c-0d5a-47af-9262-82cb221928ec" in namespace "configmap-9942" to be "Succeeded or Failed" May 4 16:10:15.686: INFO: Pod "pod-configmaps-41c7593c-0d5a-47af-9262-82cb221928ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070675ms May 4 16:10:17.689: INFO: Pod "pod-configmaps-41c7593c-0d5a-47af-9262-82cb221928ec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00552864s May 4 16:10:19.694: INFO: Pod "pod-configmaps-41c7593c-0d5a-47af-9262-82cb221928ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009861968s STEP: Saw pod success May 4 16:10:19.694: INFO: Pod "pod-configmaps-41c7593c-0d5a-47af-9262-82cb221928ec" satisfied condition "Succeeded or Failed" May 4 16:10:19.696: INFO: Trying to get logs from node node2 pod pod-configmaps-41c7593c-0d5a-47af-9262-82cb221928ec container configmap-volume-test: STEP: delete the pod May 4 16:10:19.709: INFO: Waiting for pod pod-configmaps-41c7593c-0d5a-47af-9262-82cb221928ec to disappear May 4 16:10:19.711: INFO: Pod pod-configmaps-41c7593c-0d5a-47af-9262-82cb221928ec no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:19.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9942" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":161,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:10.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 4 16:10:16.581: INFO: Successfully updated pod "labelsupdate88e9c68f-73d0-444b-947b-a387966c10b2" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:20.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5911" for this suite. • [SLOW TEST:10.586 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:07:56.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1956 May 4 16:08:00.242: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 4 16:08:00.527: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 4 16:08:00.528: INFO: stdout: "iptables" May 4 16:08:00.528: INFO: proxyMode: iptables May 4 16:08:00.534: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 4 16:08:00.536: INFO: Pod kube-proxy-mode-detector still exists May 4 16:08:02.537: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 4 16:08:02.539: INFO: Pod kube-proxy-mode-detector still exists May 4 16:08:04.537: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 4 16:08:04.540: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-1956 STEP: creating replication controller affinity-nodeport-timeout in namespace services-1956 I0504 16:08:04.552261 28 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1956, replica count: 3 I0504 16:08:07.602961 28 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:08:10.604464 28 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:08:10.615: INFO: Creating new exec pod May 4 16:08:15.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 4 16:08:15.917: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" May 4 16:08:15.917: INFO: stdout: "" May 4 16:08:15.918: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.233.16.203 80' May 4 16:08:16.174: INFO: stderr: "+ nc -zv -t -w 2 10.233.16.203 80\nConnection to 10.233.16.203 80 port [tcp/http] succeeded!\n" May 4 16:08:16.174: INFO: stdout: "" May 4 16:08:16.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:08:16.430: INFO: rc: 1 May 4 16:08:16.431: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:08:17.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:08:17.712: INFO: rc: 1 May 4 16:08:17.712: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:08:18.431 to 16:09:09.777: INFO: [52 further identical probe attempts elided: each ran '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884', each returned rc: 1 with "nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused", and each logged "Retrying..."]
May 4 16:09:10.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:10.906: INFO: rc: 1 May 4 16:09:10.906: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:11.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:11.740: INFO: rc: 1 May 4 16:09:11.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:12.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:12.700: INFO: rc: 1 May 4 16:09:12.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:13.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:13.819: INFO: rc: 1 May 4 16:09:13.820: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:14.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:15.152: INFO: rc: 1 May 4 16:09:15.152: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:15.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:15.679: INFO: rc: 1 May 4 16:09:15.679: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:16.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:16.697: INFO: rc: 1 May 4 16:09:16.697: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:17.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:17.704: INFO: rc: 1 May 4 16:09:17.704: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:18.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:18.690: INFO: rc: 1 May 4 16:09:18.690: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:19.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:19.724: INFO: rc: 1 May 4 16:09:19.724: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:20.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:20.701: INFO: rc: 1 May 4 16:09:20.701: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:21.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:21.753: INFO: rc: 1 May 4 16:09:21.753: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:22.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:22.667: INFO: rc: 1 May 4 16:09:22.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:23.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:23.696: INFO: rc: 1 May 4 16:09:23.696: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:24.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:24.725: INFO: rc: 1 May 4 16:09:24.725: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:25.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:25.772: INFO: rc: 1 May 4 16:09:25.772: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:26.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:26.762: INFO: rc: 1 May 4 16:09:26.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:27.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:27.966: INFO: rc: 1 May 4 16:09:27.966: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:28.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:29.066: INFO: rc: 1 May 4 16:09:29.066: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:29.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:29.680: INFO: rc: 1 May 4 16:09:29.680: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:30.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:30.783: INFO: rc: 1 May 4 16:09:30.784: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:31.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:31.721: INFO: rc: 1 May 4 16:09:31.721: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:32.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:32.977: INFO: rc: 1 May 4 16:09:32.977: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:33.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:33.783: INFO: rc: 1 May 4 16:09:33.783: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:34.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:34.697: INFO: rc: 1 May 4 16:09:34.697: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:35.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:35.674: INFO: rc: 1 May 4 16:09:35.674: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:36.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:36.673: INFO: rc: 1 May 4 16:09:36.673: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:37.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:37.825: INFO: rc: 1 May 4 16:09:37.825: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:38.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:38.692: INFO: rc: 1 May 4 16:09:38.692: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:39.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:39.675: INFO: rc: 1 May 4 16:09:39.675: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:40.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:40.672: INFO: rc: 1 May 4 16:09:40.672: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:41.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:41.694: INFO: rc: 1 May 4 16:09:41.694: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:42.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:42.798: INFO: rc: 1 May 4 16:09:42.798: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:43.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:43.707: INFO: rc: 1 May 4 16:09:43.707: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:44.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:44.706: INFO: rc: 1 May 4 16:09:44.706: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:45.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:45.669: INFO: rc: 1 May 4 16:09:45.669: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:46.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:46.696: INFO: rc: 1 May 4 16:09:46.696: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:47.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:47.812: INFO: rc: 1 May 4 16:09:47.812: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:48.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:48.775: INFO: rc: 1 May 4 16:09:48.775: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:49.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884' May 4 16:09:49.696: INFO: rc: 1 May 4 16:09:49.696: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31884 nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:50.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884'
May 4 16:09:50.674: INFO: rc: 1
May 4 16:09:50.674: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1956 exec execpod-affinityz74hp -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31884:
Command stdout:
stderr:
+ nc -zv -t -w 2 10.10.190.207 31884
nc: connect to 10.10.190.207 port 31884 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
May 4 16:09:51.431: INFO: Same nc probe; rc: 1 at 16:09:51.704 (Connection refused). Retrying...
May 4 16:09:52.431: INFO: Same nc probe; rc: 1 at 16:09:52.701 (Connection refused). Retrying...
May 4 16:09:53.431: INFO: Same nc probe; rc: 1 at 16:09:53.703 (Connection refused). Retrying...
May 4 16:09:54.432: INFO: Same nc probe; rc: 1 at 16:09:54.760 (Connection refused). Retrying...
May 4 16:09:55.432: INFO: Same nc probe; rc: 1 at 16:09:55.716 (Connection refused). Retrying...
May 4 16:09:56.431: INFO: Same nc probe; rc: 1 at 16:09:56.695 (Connection refused). Retrying...
May 4 16:09:57.431: INFO: Same nc probe; rc: 1 at 16:09:57.699 (Connection refused). Retrying...
May 4 16:09:58.432: INFO: Same nc probe; rc: 1 at 16:09:58.717 (Connection refused). Retrying...
May 4 16:09:59.431: INFO: Same nc probe; rc: 1 at 16:09:59.704 (Connection refused). Retrying...
May 4 16:10:00.431: INFO: Same nc probe; rc: 1 at 16:10:00.718 (Connection refused). Retrying...
May 4 16:10:01.431: INFO: Same nc probe; rc: 1 at 16:10:01.685 (Connection refused). Retrying...
May 4 16:10:02.431: INFO: Same nc probe; rc: 1 at 16:10:02.707 (Connection refused). Retrying...
May 4 16:10:03.431: INFO: Same nc probe; rc: 1 at 16:10:03.759 (Connection refused). Retrying...
May 4 16:10:04.431: INFO: Same nc probe; rc: 1 at 16:10:04.708 (Connection refused). Retrying...
May 4 16:10:05.431: INFO: Same nc probe; rc: 1 at 16:10:05.695 (Connection refused). Retrying...
May 4 16:10:06.431: INFO: Same nc probe; rc: 1 at 16:10:06.724 (Connection refused). Retrying...
May 4 16:10:07.431: INFO: Same nc probe; rc: 1 at 16:10:07.690 (Connection refused). Retrying...
May 4 16:10:08.433: INFO: Same nc probe; rc: 1 at 16:10:08.695 (Connection refused). Retrying...
May 4 16:10:09.431: INFO: Same nc probe; rc: 1 at 16:10:09.696 (Connection refused). Retrying...
May 4 16:10:10.431: INFO: Same nc probe; rc: 1 at 16:10:10.824 (Connection refused). Retrying...
May 4 16:10:11.431: INFO: Same nc probe; rc: 1 at 16:10:11.730 (Connection refused). Retrying...
May 4 16:10:12.431: INFO: Same nc probe; rc: 1 at 16:10:12.676 (Connection refused). Retrying...
May 4 16:10:13.432: INFO: Same nc probe; rc: 1 at 16:10:13.713 (Connection refused). Retrying...
May 4 16:10:14.431: INFO: Same nc probe; rc: 1 at 16:10:14.680 (Connection refused). Retrying...
May 4 16:10:15.431: INFO: Same nc probe; rc: 1 at 16:10:15.704 (Connection refused). Retrying...
May 4 16:10:16.431: INFO: Same nc probe; rc: 1 at 16:10:16.721 (Connection refused). Retrying...
May 4 16:10:16.722: INFO: Same nc probe; rc: 1 at 16:10:16.961 (Connection refused). Retrying...
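The check being retried above is simply `nc -zv -t -w 2 10.10.190.207 31884` executed inside the exec pod, once per second, until the framework's deadline expires. A minimal sketch of that poll loop under stated assumptions: this is not the e2e framework's code; it substitutes bash's `/dev/tcp` redirection for `nc`, and an attempt count for the framework's wall-clock deadline, so it runs without any cluster:

```shell
# Sketch of the reachability poll the e2e framework performs (not its actual code).
# probe_until_reachable HOST PORT ATTEMPTS: try a TCP connect once per second,
# succeed as soon as one connect works, otherwise report a timeout-style error.
# Uses bash's /dev/tcp redirection in place of `nc -zv -t -w 2`.
probe_until_reachable() {
  host=$1; port=$2; attempts=$3
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
      echo "endpoint $host:$port is reachable"
      return 0
    fi
    echo "rc: 1 -- connect to $host port $port failed. Retrying..."
    i=$((i + 1))
    sleep 1
  done
  echo "service is not reachable after $attempts attempts on endpoint $host:$port over TCP protocol"
  return 1
}
```

Against a NodePort whose backends never accept connections, every connect is refused and the loop ends with the same "service is not reachable" error this test reports.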
May 4 16:10:16.962: FAIL: Unexpected error:
    <*errors.errorString | 0xc0073b6d10>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31884 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31884 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc001a78000, 0x54075e0, 0xc001b03e40, 0xc00049fb00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3444 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.29()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2525 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0015fcd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc0015fcd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc0015fcd80, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
May 4 16:10:16.963: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1956, will wait for the garbage collector to delete the pods
May 4 16:10:17.037: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 4.873228ms
May 4 16:10:17.138: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.477092ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "services-1956".
STEP: Found 38 events.
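Among the events collected below is a FailedToUpdateEndpoint from the endpoint controller: "Operation cannot be fulfilled on endpoints ... the object has been modified; please apply your changes to the latest version and try again". That is the API server's optimistic-concurrency check rejecting a write made against a stale resourceVersion during pod teardown; controllers handle it by re-reading the object and retrying, much like client-go's RetryOnConflict helper. A toy sketch of that retry-on-conflict shape, where `update_object` is a hypothetical stand-in (not a real API call) that fails like a 409 Conflict twice before the refreshed copy goes through:

```shell
# Toy retry-on-conflict loop (a sketch, not the endpoint controller's code).
# update_object is a hypothetical stand-in simulating an API write that is
# rejected twice with "the object has been modified" before succeeding.
CONFLICTS_LEFT=2
update_object() {
  if [ "$CONFLICTS_LEFT" -gt 0 ]; then
    CONFLICTS_LEFT=$((CONFLICTS_LEFT - 1))
    echo "conflict: object modified, re-reading latest version and retrying" >&2
    return 1
  fi
  echo "endpoints updated"
}

retry_on_conflict() {
  n=0
  while [ "$n" -lt 5 ]; do   # bounded retries, as a real controller would use
    if update_object; then return 0; fi
    n=$((n + 1))
  done
  echo "giving up after $n conflicts" >&2
  return 1
}
```

This is why the FailedToUpdateEndpoint event during cleanup is benign: the controller simply retried against the latest Endpoints object.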
May 4 16:10:29.955: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-ksvxx: { } Scheduled: Successfully assigned services-1956/affinity-nodeport-timeout-ksvxx to node1
May 4 16:10:29.955: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-l62pm: { } Scheduled: Successfully assigned services-1956/affinity-nodeport-timeout-l62pm to node2
May 4 16:10:29.955: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-ncbmb: { } Scheduled: Successfully assigned services-1956/affinity-nodeport-timeout-ncbmb to node1
May 4 16:10:29.955: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityz74hp: { } Scheduled: Successfully assigned services-1956/execpod-affinityz74hp to node1
May 4 16:10:29.955: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-1956/kube-proxy-mode-detector to node2
May 4 16:10:29.955: INFO: At 2021-05-04 16:07:57 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:10:29.955: INFO: At 2021-05-04 16:07:58 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 1.231650499s
May 4 16:10:29.955: INFO: At 2021-05-04 16:07:58 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container detector
May 4 16:10:29.955: INFO: At 2021-05-04 16:07:59 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container detector
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:02 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container detector
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:04 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-ncbmb
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:04 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-ksvxx
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:04 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-l62pm
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:06 +0000 UTC - event for affinity-nodeport-timeout-ksvxx: {multus } AddedInterface: Add eth0 [10.244.4.113/24]
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:06 +0000 UTC - event for affinity-nodeport-timeout-ksvxx: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:06 +0000 UTC - event for affinity-nodeport-timeout-l62pm: {kubelet node2} Created: Created container affinity-nodeport-timeout
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:06 +0000 UTC - event for affinity-nodeport-timeout-l62pm: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 456.566011ms
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:06 +0000 UTC - event for affinity-nodeport-timeout-l62pm: {kubelet node2} Started: Started container affinity-nodeport-timeout
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:06 +0000 UTC - event for affinity-nodeport-timeout-l62pm: {multus } AddedInterface: Add eth0 [10.244.3.145/24]
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:06 +0000 UTC - event for affinity-nodeport-timeout-l62pm: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:06 +0000 UTC - event for affinity-nodeport-timeout-ncbmb: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:06 +0000 UTC - event for affinity-nodeport-timeout-ncbmb: {multus } AddedInterface: Add eth0 [10.244.4.114/24]
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:07 +0000 UTC - event for affinity-nodeport-timeout-ksvxx: {kubelet node1} Created: Created container affinity-nodeport-timeout
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:07 +0000 UTC - event for affinity-nodeport-timeout-ksvxx: {kubelet node1} Started: Started container affinity-nodeport-timeout
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:07 +0000 UTC - event for affinity-nodeport-timeout-ksvxx: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 559.694334ms
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:07 +0000 UTC - event for affinity-nodeport-timeout-ncbmb: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 758.485475ms
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:07 +0000 UTC - event for affinity-nodeport-timeout-ncbmb: {kubelet node1} Created: Created container affinity-nodeport-timeout
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:07 +0000 UTC - event for affinity-nodeport-timeout-ncbmb: {kubelet node1} Started: Started container affinity-nodeport-timeout
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:12 +0000 UTC - event for execpod-affinityz74hp: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 478.21199ms
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:12 +0000 UTC - event for execpod-affinityz74hp: {kubelet node1} Created: Created container agnhost-container
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:12 +0000 UTC - event for execpod-affinityz74hp: {kubelet node1} Started: Started container agnhost-container
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:12 +0000 UTC - event for execpod-affinityz74hp: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:10:29.955: INFO: At 2021-05-04 16:08:12 +0000 UTC - event for execpod-affinityz74hp: {multus } AddedInterface: Add eth0 [10.244.4.115/24]
May 4 16:10:29.955: INFO: At 2021-05-04 16:10:16 +0000 UTC - event for execpod-affinityz74hp: {kubelet node1} Killing: Stopping container agnhost-container
May 4 16:10:29.955: INFO: At 2021-05-04 16:10:17 +0000 UTC - event for affinity-nodeport-timeout: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-1956/affinity-nodeport-timeout: Operation cannot be fulfilled on endpoints "affinity-nodeport-timeout": the object has been modified; please apply your changes to the latest version and try again
May 4 16:10:29.955: INFO: At 2021-05-04 16:10:17 +0000 UTC - event for affinity-nodeport-timeout-ksvxx: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout
May 4 16:10:29.955: INFO: At 2021-05-04 16:10:17 +0000 UTC - event for affinity-nodeport-timeout-l62pm: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout
May 4 16:10:29.955: INFO: At 2021-05-04 16:10:17 +0000 UTC - event for affinity-nodeport-timeout-ncbmb: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout
May 4 16:10:29.957: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
May 4 16:10:29.957: INFO: 
May 4 16:10:29.961: INFO: Logging node info for node master1
May 4 16:10:29.963: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 33739 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:10:29.963: INFO: Logging kubelet events for node master1 May 4 16:10:29.966: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:10:29.985: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:10:29.986: INFO: Init container 
install-cni ready: true, restart count 0
May 4 16:10:29.986: INFO: Container kube-flannel ready: true, restart count 3
May 4 16:10:29.986: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:29.986: INFO: Container kube-multus ready: true, restart count 1
May 4 16:10:29.986: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:29.986: INFO: Container coredns ready: true, restart count 1
May 4 16:10:29.986: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:29.986: INFO: Container nfd-controller ready: true, restart count 0
May 4 16:10:29.986: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:29.986: INFO: Container kube-scheduler ready: true, restart count 0
May 4 16:10:29.986: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:29.986: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:10:29.986: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:29.986: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:10:29.986: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:29.986: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:10:29.986: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded)
May 4 16:10:29.986: INFO: Container docker-registry ready: true, restart count 0
May 4 16:10:29.986: INFO: Container nginx ready: true, restart count 0
May 4 16:10:29.986: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses
recorded)
May 4 16:10:29.986: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:10:29.986: INFO: Container node-exporter ready: true, restart count 0
W0504 16:10:29.999005 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:10:30.029: INFO: Latency metrics for node master1
May 4 16:10:30.029: INFO: Logging node info for node master2
May 4 16:10:30.032: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 33733 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:10:30.033: INFO: Logging kubelet events for node master2
May 4 16:10:30.035: INFO: Logging pods the kubelet thinks is on node master2
May 4 16:10:30.043: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.043: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:10:30.043: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.043: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:10:30.043: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.043: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:10:30.043: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:10:30.043: INFO: Init container install-cni ready: true, restart count 0
May 4 16:10:30.043: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:10:30.043: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.043: INFO: Container kube-multus ready: true, restart count 1
May 4 16:10:30.043: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.043: INFO: Container autoscaler ready: true, restart count 1
May 4 16:10:30.043: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container
statuses recorded)
May 4 16:10:30.043: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:10:30.043: INFO: Container node-exporter ready: true, restart count 0
May 4 16:10:30.043: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.043: INFO: Container kube-apiserver ready: true, restart count 0
W0504 16:10:30.055184 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:10:30.077: INFO: Latency metrics for node master2
May 4 16:10:30.078: INFO: Logging node info for node master3
May 4 16:10:30.080: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 33731 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:26 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:10:26 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:10:30.081: INFO: Logging kubelet events for node master3
May 4 16:10:30.082: INFO: Logging pods the kubelet thinks is on node master3
May 4 16:10:30.090: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.090: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:10:30.090: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.090: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:10:30.090: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.090: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:10:30.090: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:10:30.090: INFO: Init container install-cni ready: true, restart count 0
May 4 16:10:30.090: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:10:30.090: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.090: INFO: Container kube-multus ready: true, restart count 1
May 4 16:10:30.090: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.090: INFO: Container coredns ready: true, restart count 1
May 4 16:10:30.090: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses
recorded)
May 4 16:10:30.090: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:10:30.090: INFO: Container node-exporter ready: true, restart count 0
May 4 16:10:30.090: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:10:30.090: INFO: Container kube-apiserver ready: true, restart count 0
W0504 16:10:30.104668 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:10:30.134: INFO: Latency metrics for node master3
May 4 16:10:30.134: INFO: Logging node info for node node1
May 4 16:10:30.137: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 33755 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:28 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:28 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:28 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:10:28 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:10:30.137: INFO: Logging kubelet events for node node1 May 4 16:10:30.139: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:10:30.154: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:10:30.154: INFO: Container discover ready: false, restart count 0 May 4 16:10:30.154: INFO: Container init ready: false, restart count 0 May 4 16:10:30.154: INFO: Container install ready: false, restart count 0 May 4 16:10:30.154: INFO: downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b started at 2021-05-04 16:09:46 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.154: INFO: Container dapi-container ready: false, restart count 0 May 4 16:10:30.154: INFO: simpletest.deployment-7f7555f8bc-dnbdp started at 2021-05-04 16:10:20 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.154: INFO: Container nginx ready: false, restart count 0 May 4 16:10:30.154: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.154: INFO: Container kube-multus ready: true, restart count 1 May 4 16:10:30.154: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.154: INFO: Container webserver ready: false, restart count 0 May 4 16:10:30.154: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses 
recorded) May 4 16:10:30.154: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:10:30.154: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:10:30.154: INFO: Container nodereport ready: true, restart count 0 May 4 16:10:30.154: INFO: Container reconcile ready: true, restart count 0 May 4 16:10:30.154: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:10:30.154: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:10:30.154: INFO: Container grafana ready: true, restart count 0 May 4 16:10:30.154: INFO: Container prometheus ready: true, restart count 1 May 4 16:10:30.155: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:10:30.155: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:10:30.155: INFO: affinity-clusterip-transition-595q4 started at 2021-05-04 16:10:10 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.155: INFO: Container affinity-clusterip-transition ready: false, restart count 0 May 4 16:10:30.155: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:10:30.155: INFO: Init container install-cni ready: true, restart count 2 May 4 16:10:30.155: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:10:30.155: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.155: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:10:30.155: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:10:30.155: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:10:30.155: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:10:30.155: INFO: node-exporter-k8qd9 started at 
2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:10:30.155: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:10:30.155: INFO: Container node-exporter ready: true, restart count 0 May 4 16:10:30.155: INFO: affinity-nodeport-q859k started at 2021-05-04 16:08:57 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.155: INFO: Container affinity-nodeport ready: true, restart count 0 May 4 16:10:30.155: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:10:30.155: INFO: Container collectd ready: true, restart count 0 May 4 16:10:30.155: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:10:30.155: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:10:30.155: INFO: pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40 started at 2021-05-04 16:09:08 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.155: INFO: Container env-test ready: false, restart count 0 May 4 16:10:30.155: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.155: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:10:30.155: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.155: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:10:30.155: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.155: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:10:30.155: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.155: INFO: Container liveness-http ready: false, restart count 15 W0504 16:10:30.169402 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:10:30.199: INFO: Latency metrics for node node1 May 4 16:10:30.199: INFO: Logging node info for node node2 May 4 16:10:30.202: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 33756 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:28 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:28 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:10:28 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:10:28 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 
gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:10:30.202: INFO: Logging kubelet events for node node2 May 4 16:10:30.204: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:10:30.219: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:10:30.219: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:10:30.219: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:10:30.219: INFO: Container nodereport ready: true, restart count 0 May 4 16:10:30.219: INFO: Container reconcile ready: true, restart count 0 May 4 16:10:30.219: INFO: affinity-nodeport-vjvq8 started at 2021-05-04 16:08:57 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container affinity-nodeport ready: true, restart count 0 May 4 16:10:30.219: INFO: downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 started at 2021-05-04 16:10:01 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container dapi-container ready: false, restart count 0 May 4 16:10:30.219: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:10:30.219: INFO: Init container install-cni ready: true, restart count 2 May 4 16:10:30.219: INFO: Container kube-flannel ready: true, restart count 2 May 4 
16:10:30.219: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:10:30.219: INFO: simpletest.deployment-7f7555f8bc-sbx99 started at 2021-05-04 16:10:20 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container nginx ready: false, restart count 0 May 4 16:10:30.219: INFO: execpod-affinityl8j2v started at 2021-05-04 16:09:03 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:10:30.219: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container webserver ready: true, restart count 0 May 4 16:10:30.219: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:10:30.219: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:10:30.219: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:10:30.219: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:10:30.219: INFO: Container node-exporter ready: true, restart count 0 May 4 16:10:30.219: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:10:30.219: INFO: Container tas-controller ready: true, restart count 0 May 4 16:10:30.219: INFO: Container tas-extender ready: true, restart count 0 May 4 16:10:30.219: INFO: affinity-nodeport-tmr9l started at 2021-05-04 16:08:58 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container affinity-nodeport ready: true, restart 
count 0 May 4 16:10:30.219: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:10:30.219: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container kube-multus ready: true, restart count 1 May 4 16:10:30.219: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:10:30.219: INFO: Container discover ready: false, restart count 0 May 4 16:10:30.219: INFO: Container init ready: false, restart count 0 May 4 16:10:30.219: INFO: Container install ready: false, restart count 0 May 4 16:10:30.219: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:10:30.219: INFO: Container collectd ready: true, restart count 0 May 4 16:10:30.219: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:10:30.219: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:10:30.219: INFO: affinity-clusterip-transition-bjq6n started at 2021-05-04 16:10:10 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container affinity-clusterip-transition ready: true, restart count 0 May 4 16:10:30.219: INFO: pod-projected-configmaps-3004a8a9-eea2-45b0-8fae-811f8b066f6b started at 2021-05-04 16:10:19 +0000 UTC (0+3 container statuses recorded) May 4 16:10:30.219: INFO: Container createcm-volume-test ready: true, restart count 0 May 4 16:10:30.219: INFO: Container delcm-volume-test ready: true, restart count 0 May 4 16:10:30.219: INFO: Container updcm-volume-test ready: true, restart count 0 May 4 16:10:30.219: INFO: var-expansion-11a0463e-02e0-4f42-95b7-041fc5123e73 started at 2021-05-04 16:09:35 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container dapi-container ready: false, restart count 0 May 4 
16:10:30.219: INFO: labelsupdate88e9c68f-73d0-444b-947b-a387966c10b2 started at 2021-05-04 16:10:10 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container client-container ready: false, restart count 0 May 4 16:10:30.219: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:10:30.219: INFO: affinity-clusterip-transition-l7b75 started at 2021-05-04 16:10:10 +0000 UTC (0+1 container statuses recorded) May 4 16:10:30.219: INFO: Container affinity-clusterip-transition ready: true, restart count 0 W0504 16:10:30.231389 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:10:30.269: INFO: Latency metrics for node node2 May 4 16:10:30.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1956" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • Failure [154.081 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:10:16.962: Unexpected error: <*errors.errorString | 0xc0073b6d10>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31884 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31884 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3444 ------------------------------ {"msg":"FAILED [sig-network] Services should have session 
affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":27,"skipped":328,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:19.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-cacf8a5c-bdcf-499e-8146-252bd03be118 STEP: Creating configMap with name cm-test-opt-upd-eb930570-bcfb-4082-98d6-ee0e18aafdca STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cacf8a5c-bdcf-499e-8146-252bd03be118 STEP: Updating configmap cm-test-opt-upd-eb930570-bcfb-4082-98d6-ee0e18aafdca STEP: Creating configMap with name cm-test-opt-create-239ff747-2f2a-48cc-958e-d31e9bacdcc3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:31.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6454" for this suite. 
• [SLOW TEST:12.259 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":170,"failed":0} S ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:32.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:32.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2743" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":10,"skipped":171,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:10.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-288 STEP: creating service affinity-clusterip-transition in namespace services-288 STEP: creating replication controller affinity-clusterip-transition in namespace services-288 I0504 16:10:10.089956 38 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-288, replica count: 3 I0504 16:10:13.140617 38 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:10:16.141094 38 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:10:19.141684 38 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:10:19.146: INFO: Creating new exec pod May 4 16:10:24.166: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-288 exec execpod-affinity4zc8r -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 4 16:10:24.432: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" May 4 16:10:24.432: INFO: stdout: "" May 4 16:10:24.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-288 exec execpod-affinity4zc8r -- /bin/sh -x -c nc -zv -t -w 2 10.233.56.181 80' May 4 16:10:24.719: INFO: stderr: "+ nc -zv -t -w 2 10.233.56.181 80\nConnection to 10.233.56.181 80 port [tcp/http] succeeded!\n" May 4 16:10:24.719: INFO: stdout: "" May 4 16:10:24.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-288 exec execpod-affinity4zc8r -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.56.181:80/ ; done' May 4 16:10:25.033: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n" May 4 16:10:25.033: INFO: stdout: "\naffinity-clusterip-transition-595q4\naffinity-clusterip-transition-595q4\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-bjq6n\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-595q4\naffinity-clusterip-transition-bjq6n\naffinity-clusterip-transition-595q4\naffinity-clusterip-transition-bjq6n\naffinity-clusterip-transition-595q4\naffinity-clusterip-transition-595q4\naffinity-clusterip-transition-bjq6n\naffinity-clusterip-transition-bjq6n\naffinity-clusterip-transition-bjq6n\naffinity-clusterip-transition-595q4\naffinity-clusterip-transition-bjq6n" May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-595q4 May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-595q4 May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-bjq6n May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-595q4 May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-bjq6n May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-595q4 May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-bjq6n May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-595q4 May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-595q4 May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-bjq6n May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-bjq6n May 4 16:10:25.033: INFO: Received response from host: 
affinity-clusterip-transition-bjq6n May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-595q4 May 4 16:10:25.033: INFO: Received response from host: affinity-clusterip-transition-bjq6n May 4 16:10:25.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-288 exec execpod-affinity4zc8r -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.56.181:80/ ; done' May 4 16:10:25.450: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.181:80/\n" May 4 16:10:25.451: INFO: stdout: 
"\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75\naffinity-clusterip-transition-l7b75" May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Received 
response from host: affinity-clusterip-transition-l7b75 May 4 16:10:25.451: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-288, will wait for the garbage collector to delete the pods May 4 16:10:25.518: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.172425ms May 4 16:10:25.618: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.350434ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:40.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-288" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:29.991 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:30.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a 
ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:41.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2600" for this suite. • [SLOW TEST:11.058 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":-1,"completed":28,"skipped":335,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:32.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:43.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9489" for this suite. • [SLOW TEST:11.058 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":11,"skipped":173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:43.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:10:56.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9431" for this suite. 
• [SLOW TEST:13.098 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":12,"skipped":217,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:41.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 4 16:10:41.466: INFO: >>> kubeConfig: /root/.kube/config May 4 16:10:49.362: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:05.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6917" for this suite. 
• [SLOW TEST:24.562 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":29,"skipped":375,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:56.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-gddp9 in namespace proxy-9940 I0504 16:10:56.391443 21 runners.go:190] Created replication controller with name: proxy-service-gddp9, namespace: proxy-9940, replica count: 1 I0504 16:10:57.442194 21 runners.go:190] proxy-service-gddp9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:10:58.442557 21 runners.go:190] proxy-service-gddp9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:10:59.443124 21 runners.go:190] 
proxy-service-gddp9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0504 16:11:00.443515 21 runners.go:190] proxy-service-gddp9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0504 16:11:01.444022 21 runners.go:190] proxy-service-gddp9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0504 16:11:02.444288 21 runners.go:190] proxy-service-gddp9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0504 16:11:03.445179 21 runners.go:190] proxy-service-gddp9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0504 16:11:04.445879 21 runners.go:190] proxy-service-gddp9 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:11:04.448: INFO: setup took 8.067926782s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 6.269367ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 6.264888ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 6.447239ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... 
(200; 6.647262ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 6.526151ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 6.609199ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 6.448284ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... (200; 6.443889ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 6.514296ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 6.59109ms) May 4 16:11:04.455: INFO: (0) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 6.443956ms) May 4 16:11:04.456: INFO: (0) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 7.599616ms) May 4 16:11:04.456: INFO: (0) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 7.333265ms) May 4 16:11:04.458: INFO: (0) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 9.317266ms) May 4 16:11:04.458: INFO: (0) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 9.365954ms) May 4 16:11:04.461: INFO: (0) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: ... 
(200; 2.793075ms) May 4 16:11:04.464: INFO: (1) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.758301ms) May 4 16:11:04.464: INFO: (1) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.856122ms) May 4 16:11:04.464: INFO: (1) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 3.119279ms) May 4 16:11:04.464: INFO: (1) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 3.176167ms) May 4 16:11:04.464: INFO: (1) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.233645ms) May 4 16:11:04.464: INFO: (1) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.515565ms) May 4 16:11:04.464: INFO: (1) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.232696ms) May 4 16:11:04.464: INFO: (1) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 3.236103ms) May 4 16:11:04.465: INFO: (1) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.327966ms) May 4 16:11:04.465: INFO: (1) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.782016ms) May 4 16:11:04.467: INFO: (2) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 2.067903ms) May 4 16:11:04.467: INFO: (2) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.35147ms) May 4 16:11:04.467: INFO: (2) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.491415ms) May 4 16:11:04.467: INFO: (2) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: test (200; 2.983409ms) May 4 16:11:04.468: INFO: (2) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... 
(200; 3.022944ms) May 4 16:11:04.468: INFO: (2) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.532028ms) May 4 16:11:04.468: INFO: (2) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 3.343418ms) May 4 16:11:04.468: INFO: (2) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.485665ms) May 4 16:11:04.469: INFO: (2) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.655712ms) May 4 16:11:04.469: INFO: (2) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.750411ms) May 4 16:11:04.469: INFO: (2) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.718158ms) May 4 16:11:04.471: INFO: (3) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 2.195376ms) May 4 16:11:04.471: INFO: (3) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... 
(200; 2.45371ms) May 4 16:11:04.471: INFO: (3) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.473716ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.794401ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 2.717779ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.075624ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 3.358757ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 3.287797ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 3.400897ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.480901ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 3.409789ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 3.588722ms) May 4 16:11:04.472: INFO: (3) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: test<... (200; 2.941559ms) May 4 16:11:04.476: INFO: (4) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 2.933619ms) May 4 16:11:04.476: INFO: (4) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... 
(200; 3.033836ms) May 4 16:11:04.476: INFO: (4) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 3.215381ms) May 4 16:11:04.477: INFO: (4) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.341371ms) May 4 16:11:04.477: INFO: (4) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.373248ms) May 4 16:11:04.477: INFO: (4) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 3.29875ms) May 4 16:11:04.477: INFO: (4) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.857193ms) May 4 16:11:04.477: INFO: (4) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 4.179546ms) May 4 16:11:04.477: INFO: (4) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 4.082969ms) May 4 16:11:04.477: INFO: (4) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 4.146026ms) May 4 16:11:04.479: INFO: (5) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.054378ms) May 4 16:11:04.480: INFO: (5) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: test<... (200; 2.367524ms) May 4 16:11:04.480: INFO: (5) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.955195ms) May 4 16:11:04.481: INFO: (5) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.889968ms) May 4 16:11:04.481: INFO: (5) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... 
(200; 2.888232ms) May 4 16:11:04.481: INFO: (5) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 3.121469ms) May 4 16:11:04.481: INFO: (5) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.279555ms) May 4 16:11:04.481: INFO: (5) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 3.295067ms) May 4 16:11:04.481: INFO: (5) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.441384ms) May 4 16:11:04.481: INFO: (5) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.379231ms) May 4 16:11:04.481: INFO: (5) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.920835ms) May 4 16:11:04.482: INFO: (5) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 4.260267ms) May 4 16:11:04.482: INFO: (5) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 4.254333ms) May 4 16:11:04.484: INFO: (6) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.052363ms) May 4 16:11:04.484: INFO: (6) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... (200; 2.361644ms) May 4 16:11:04.484: INFO: (6) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.411299ms) May 4 16:11:04.485: INFO: (6) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.490294ms) May 4 16:11:04.485: INFO: (6) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: test (200; 2.888699ms) May 4 16:11:04.485: INFO: (6) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... 
(200; 2.999123ms) May 4 16:11:04.485: INFO: (6) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.96457ms) May 4 16:11:04.485: INFO: (6) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 3.051144ms) May 4 16:11:04.485: INFO: (6) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 3.102856ms) May 4 16:11:04.485: INFO: (6) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.561047ms) May 4 16:11:04.486: INFO: (6) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 3.601331ms) May 4 16:11:04.486: INFO: (6) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.88569ms) May 4 16:11:04.486: INFO: (6) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 4.181959ms) May 4 16:11:04.486: INFO: (6) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 4.158783ms) May 4 16:11:04.488: INFO: (7) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 1.904391ms) May 4 16:11:04.489: INFO: (7) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.070773ms) May 4 16:11:04.489: INFO: (7) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.177759ms) May 4 16:11:04.489: INFO: (7) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 2.505135ms) May 4 16:11:04.489: INFO: (7) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.751412ms) May 4 16:11:04.489: INFO: (7) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.680053ms) May 4 16:11:04.489: INFO: (7) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: ... 
(200; 2.983975ms) May 4 16:11:04.490: INFO: (7) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 3.227544ms) May 4 16:11:04.490: INFO: (7) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.2012ms) May 4 16:11:04.490: INFO: (7) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 3.355739ms) May 4 16:11:04.490: INFO: (7) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 3.565381ms) May 4 16:11:04.490: INFO: (7) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.660579ms) May 4 16:11:04.490: INFO: (7) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 4.054744ms) May 4 16:11:04.490: INFO: (7) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.951035ms) May 4 16:11:04.492: INFO: (8) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 1.904546ms) May 4 16:11:04.492: INFO: (8) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.078745ms) May 4 16:11:04.493: INFO: (8) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: ... (200; 2.408694ms) May 4 16:11:04.493: INFO: (8) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.392988ms) May 4 16:11:04.493: INFO: (8) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 2.42528ms) May 4 16:11:04.493: INFO: (8) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... 
(200; 2.451301ms) May 4 16:11:04.493: INFO: (8) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.880756ms) May 4 16:11:04.494: INFO: (8) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.228292ms) May 4 16:11:04.494: INFO: (8) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.503566ms) May 4 16:11:04.494: INFO: (8) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.719092ms) May 4 16:11:04.494: INFO: (8) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 3.564129ms) May 4 16:11:04.494: INFO: (8) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.752519ms) May 4 16:11:04.494: INFO: (8) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 3.763689ms) May 4 16:11:04.495: INFO: (8) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 4.10321ms) May 4 16:11:04.497: INFO: (9) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.108966ms) May 4 16:11:04.497: INFO: (9) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 2.092225ms) May 4 16:11:04.497: INFO: (9) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.230029ms) May 4 16:11:04.497: INFO: (9) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.59659ms) May 4 16:11:04.497: INFO: (9) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: ... 
(200; 2.433342ms) May 4 16:11:04.498: INFO: (9) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 2.874613ms) May 4 16:11:04.498: INFO: (9) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.93988ms) May 4 16:11:04.498: INFO: (9) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 2.864445ms) May 4 16:11:04.498: INFO: (9) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 2.89588ms) May 4 16:11:04.498: INFO: (9) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.073448ms) May 4 16:11:04.498: INFO: (9) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.39563ms) May 4 16:11:04.498: INFO: (9) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 3.497983ms) May 4 16:11:04.498: INFO: (9) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.790346ms) May 4 16:11:04.499: INFO: (9) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.985837ms) May 4 16:11:04.501: INFO: (10) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 1.764741ms) May 4 16:11:04.501: INFO: (10) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.175669ms) May 4 16:11:04.501: INFO: (10) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... 
(200; 2.33697ms) May 4 16:11:04.502: INFO: (10) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 2.75547ms) May 4 16:11:04.502: INFO: (10) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.885769ms) May 4 16:11:04.502: INFO: (10) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: test (200; 2.488192ms) May 4 16:11:04.506: INFO: (11) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.540699ms) May 4 16:11:04.506: INFO: (11) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 2.694743ms) May 4 16:11:04.507: INFO: (11) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... (200; 2.824879ms) May 4 16:11:04.507: INFO: (11) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 2.878924ms) May 4 16:11:04.507: INFO: (11) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 3.086847ms) May 4 16:11:04.507: INFO: (11) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.180165ms) May 4 16:11:04.507: INFO: (11) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 3.565196ms) May 4 16:11:04.507: INFO: (11) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.468107ms) May 4 16:11:04.507: INFO: (11) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.656632ms) May 4 16:11:04.507: INFO: (11) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 3.567662ms) May 4 16:11:04.507: INFO: (11) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.774753ms) May 4 16:11:04.508: INFO: (11) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 
4.096623ms) May 4 16:11:04.510: INFO: (12) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.104496ms) May 4 16:11:04.510: INFO: (12) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... (200; 2.29173ms) May 4 16:11:04.510: INFO: (12) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.284216ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.688371ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 2.709075ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.688738ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.927331ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 2.969153ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 3.285697ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.365946ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 3.47706ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.521578ms) May 4 16:11:04.511: INFO: (12) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: test<... 
(200; 2.489596ms) May 4 16:11:04.515: INFO: (13) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.444381ms) May 4 16:11:04.515: INFO: (13) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 2.532899ms) May 4 16:11:04.515: INFO: (13) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: ... (200; 3.170726ms) May 4 16:11:04.515: INFO: (13) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.339952ms) May 4 16:11:04.515: INFO: (13) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 3.186316ms) May 4 16:11:04.515: INFO: (13) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 3.333823ms) May 4 16:11:04.515: INFO: (13) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 3.312513ms) May 4 16:11:04.516: INFO: (13) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.54851ms) May 4 16:11:04.516: INFO: (13) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.685073ms) May 4 16:11:04.516: INFO: (13) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.907425ms) May 4 16:11:04.518: INFO: (14) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 1.879881ms) May 4 16:11:04.519: INFO: (14) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.400522ms) May 4 16:11:04.519: INFO: (14) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.433661ms) May 4 16:11:04.519: INFO: (14) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 2.50235ms) May 4 16:11:04.519: INFO: (14) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... 
(200; 2.880385ms) May 4 16:11:04.519: INFO: (14) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: ... (200; 3.136806ms) May 4 16:11:04.519: INFO: (14) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 3.154426ms) May 4 16:11:04.520: INFO: (14) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.424811ms) May 4 16:11:04.520: INFO: (14) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 3.538028ms) May 4 16:11:04.520: INFO: (14) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.620948ms) May 4 16:11:04.520: INFO: (14) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 3.511056ms) May 4 16:11:04.520: INFO: (14) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.80807ms) May 4 16:11:04.520: INFO: (14) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 4.044199ms) May 4 16:11:04.520: INFO: (14) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.995634ms) May 4 16:11:04.522: INFO: (15) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... 
(200; 2.091235ms) May 4 16:11:04.522: INFO: (15) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 1.938827ms) May 4 16:11:04.523: INFO: (15) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.252568ms) May 4 16:11:04.523: INFO: (15) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 2.45179ms) May 4 16:11:04.523: INFO: (15) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.55265ms) May 4 16:11:04.523: INFO: (15) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 2.554636ms) May 4 16:11:04.523: INFO: (15) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.663326ms) May 4 16:11:04.523: INFO: (15) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... (200; 3.004946ms) May 4 16:11:04.523: INFO: (15) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: test (200; 2.000427ms) May 4 16:11:04.527: INFO: (16) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... (200; 2.340984ms) May 4 16:11:04.527: INFO: (16) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.501283ms) May 4 16:11:04.527: INFO: (16) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.531265ms) May 4 16:11:04.527: INFO: (16) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 2.785166ms) May 4 16:11:04.528: INFO: (16) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.853766ms) May 4 16:11:04.528: INFO: (16) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 2.86205ms) May 4 16:11:04.528: INFO: (16) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... 
(200; 3.094258ms) May 4 16:11:04.528: INFO: (16) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 3.124232ms) May 4 16:11:04.528: INFO: (16) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.549845ms) May 4 16:11:04.528: INFO: (16) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 3.528002ms) May 4 16:11:04.528: INFO: (16) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 3.79286ms) May 4 16:11:04.528: INFO: (16) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.687593ms) May 4 16:11:04.529: INFO: (16) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 4.00036ms) May 4 16:11:04.529: INFO: (16) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 3.996038ms) May 4 16:11:04.531: INFO: (17) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 1.876638ms) May 4 16:11:04.531: INFO: (17) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 2.000944ms) May 4 16:11:04.531: INFO: (17) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.059001ms) May 4 16:11:04.531: INFO: (17) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: test (200; 2.478139ms) May 4 16:11:04.531: INFO: (17) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.440467ms) May 4 16:11:04.532: INFO: (17) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.603693ms) May 4 16:11:04.532: INFO: (17) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.712356ms) May 4 16:11:04.532: INFO: (17) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... 
(200; 2.806765ms) May 4 16:11:04.532: INFO: (17) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 2.881021ms) May 4 16:11:04.532: INFO: (17) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 3.072099ms) May 4 16:11:04.532: INFO: (17) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.420325ms) May 4 16:11:04.533: INFO: (17) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.693608ms) May 4 16:11:04.533: INFO: (17) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 3.651151ms) May 4 16:11:04.533: INFO: (17) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 4.126862ms) May 4 16:11:04.533: INFO: (17) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 4.095139ms) May 4 16:11:04.535: INFO: (18) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:1080/proxy/: test<... (200; 1.763618ms) May 4 16:11:04.536: INFO: (18) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:462/proxy/: tls qux (200; 2.468603ms) May 4 16:11:04.536: INFO: (18) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 2.561777ms) May 4 16:11:04.536: INFO: (18) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 2.685769ms) May 4 16:11:04.536: INFO: (18) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... 
(200; 2.688191ms) May 4 16:11:04.536: INFO: (18) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.872786ms) May 4 16:11:04.536: INFO: (18) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 3.071431ms) May 4 16:11:04.536: INFO: (18) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 3.206646ms) May 4 16:11:04.537: INFO: (18) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.415785ms) May 4 16:11:04.537: INFO: (18) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.295257ms) May 4 16:11:04.537: INFO: (18) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:443/proxy/: test<... (200; 2.257872ms) May 4 16:11:04.540: INFO: (19) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:162/proxy/: bar (200; 2.947426ms) May 4 16:11:04.540: INFO: (19) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname2/proxy/: bar (200; 3.087801ms) May 4 16:11:04.540: INFO: (19) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:1080/proxy/: ... 
(200; 2.98428ms) May 4 16:11:04.541: INFO: (19) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg/proxy/: test (200; 3.163639ms) May 4 16:11:04.541: INFO: (19) /api/v1/namespaces/proxy-9940/pods/proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 3.106917ms) May 4 16:11:04.541: INFO: (19) /api/v1/namespaces/proxy-9940/pods/http:proxy-service-gddp9-vhjvg:160/proxy/: foo (200; 3.289099ms) May 4 16:11:04.541: INFO: (19) /api/v1/namespaces/proxy-9940/services/proxy-service-gddp9:portname1/proxy/: foo (200; 3.415877ms) May 4 16:11:04.541: INFO: (19) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname1/proxy/: tls baz (200; 3.503587ms) May 4 16:11:04.541: INFO: (19) /api/v1/namespaces/proxy-9940/pods/https:proxy-service-gddp9-vhjvg:460/proxy/: tls baz (200; 3.695322ms) May 4 16:11:04.541: INFO: (19) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname2/proxy/: bar (200; 4.01252ms) May 4 16:11:04.542: INFO: (19) /api/v1/namespaces/proxy-9940/services/http:proxy-service-gddp9:portname1/proxy/: foo (200; 4.236548ms) May 4 16:11:04.542: INFO: (19) /api/v1/namespaces/proxy-9940/services/https:proxy-service-gddp9:tlsportname2/proxy/: tls qux (200; 4.45963ms) STEP: deleting ReplicationController proxy-service-gddp9 in namespace proxy-9940, will wait for the garbage collector to delete the pods May 4 16:11:04.600: INFO: Deleting ReplicationController proxy-service-gddp9 took: 5.745028ms May 4 16:11:04.701: INFO: Terminating ReplicationController proxy-service-gddp9 pods took: 100.401268ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:09.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9940" for this suite. 
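Every proxy request above follows the apiserver proxy URL convention `/api/v1/namespaces/<ns>/{pods|services}/[scheme:]<name>[:port]/proxy/`. A minimal sketch of how such paths are assembled — an illustrative Python helper, not the Go code the e2e framework actually uses:

```python
def proxy_path(namespace, kind, name, scheme=None, port=None, path=""):
    """Build an apiserver proxy URL path of the form seen in the log:
    /api/v1/namespaces/<ns>/<kind>/[scheme:]<name>[:port]/proxy/<path>

    kind is "pods" or "services"; scheme ("http"/"https") and port
    (a number or a named service port) are optional.
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/{path}"
```

For example, `proxy_path("proxy-9940", "pods", "proxy-service-gddp9-vhjvg", scheme="https", port=462)` reproduces one of the pod proxy paths exercised above.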
• [SLOW TEST:13.551 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":13,"skipped":236,"failed":0} [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:09.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version May 4 16:11:09.934: INFO: Major version: 1 STEP: Confirm minor version May 4 16:11:09.934: INFO: cleanMinorVersion: 19 May 4 16:11:09.934: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:09.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-6495" for this suite. 
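The server-version test above logs both the raw minor version and a "cleanMinorVersion" (here both are 19, but a cluster can report a suffixed minor such as "19+"). A sketch of that sanitization step — an assumed reconstruction in Python, not the framework's Go implementation:

```python
import re

def clean_minor_version(minor):
    """Strip any non-digit suffix from a reported minor version,
    e.g. "19+" -> "19", so it can be compared numerically.
    Mirrors the cleanMinorVersion value logged by the test."""
    m = re.match(r"^\d+", minor)
    return m.group(0) if m else ""
```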
• ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":14,"skipped":236,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:09.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:15.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8062" for this suite. 
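The Watchers test above starts one watch per observed resource version and verifies they all deliver events in the same order. Since a watch started at a later resource version sees only later events, the property amounts to: each watcher's stream is a contiguous suffix of the full event order. A small sketch of that check (Python illustration; the real test is Go):

```python
def consistent_order(full, suffixes):
    """Return True if every watcher's event list is a contiguous suffix
    of the full event order — watches started at later resourceVersions
    see fewer events but never a reordered sequence."""
    return all(full[len(full) - len(s):] == s for s in suffixes)
```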
• [SLOW TEST:5.807 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":15,"skipped":238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:15.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:11:15.855: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 4 16:11:16.875: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
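The ReplicationController test above expects the controller to surface a ReplicaFailure condition when pod creation exceeds the namespace quota, and to clear it after scaling down. A sketch of the condition check on a status object (hypothetical Python over a plain dict; the test itself inspects the RC status via the Go client):

```python
def has_replica_failure(rc_status):
    """Return True if the ReplicationController status carries a
    ReplicaFailure condition with status "True" — the failure condition
    the "condition-test" rc surfaces when it exceeds its pod quota."""
    return any(
        c.get("type") == "ReplicaFailure" and c.get("status") == "True"
        for c in rc_status.get("conditions", [])
    )
```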
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:16.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6326" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":16,"skipped":276,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:16.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates May 4 16:11:16.927: INFO: created test-podtemplate-1 May 4 16:11:16.930: INFO: created test-podtemplate-2 May 4 16:11:16.935: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates May 4 16:11:16.939: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity May 4 16:11:16.962: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:16.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-2159" for this suite. 
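The PodTemplates test above creates three labeled templates, issues a DeleteCollection scoped by a label selector, then lists again to confirm the count. A toy model of DeleteCollection's selector semantics (illustrative Python over in-memory dicts, not the API server's implementation):

```python
def delete_collection(store, selector):
    """Return the templates remaining after deleting every entry whose
    labels match all key/value pairs in `selector` — the label-scoped
    DeleteCollection behavior the test exercises. An empty selector
    matches (and so deletes) everything."""
    return [
        t for t in store
        if not all(t.get("labels", {}).get(k) == v for k, v in selector.items())
    ]
```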
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":17,"skipped":284,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:08:57.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4716 STEP: creating service affinity-nodeport in namespace services-4716 STEP: creating replication controller affinity-nodeport in namespace services-4716 I0504 16:08:57.916117 24 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-4716, replica count: 3 I0504 16:09:00.966694 24 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:09:03.967056 24 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:09:03.977: INFO: Creating new exec pod May 4 16:09:09.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 4 16:09:09.262: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 
port [tcp/http] succeeded!\n" May 4 16:09:09.262: INFO: stdout: "" May 4 16:09:09.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.233.11.2 80' May 4 16:09:09.522: INFO: stderr: "+ nc -zv -t -w 2 10.233.11.2 80\nConnection to 10.233.11.2 80 port [tcp/http] succeeded!\n" May 4 16:09:09.522: INFO: stdout: "" May 4 16:09:09.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:09.776: INFO: rc: 1 May 4 16:09:09.776: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:10.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:11.038: INFO: rc: 1 May 4 16:09:11.038: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:11.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:12.032: INFO: rc: 1 May 4 16:09:12.032: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:12.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:13.119: INFO: rc: 1 May 4 16:09:13.119: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:13.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:14.024: INFO: rc: 1 May 4 16:09:14.024: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:14.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:15.195: INFO: rc: 1 May 4 16:09:15.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:15.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:16.221: INFO: rc: 1 May 4 16:09:16.221: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:16.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:17.020: INFO: rc: 1 May 4 16:09:17.021: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:17.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:18.042: INFO: rc: 1 May 4 16:09:18.042: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:18.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:19.050: INFO: rc: 1 May 4 16:09:19.051: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:19.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:20.037: INFO: rc: 1 May 4 16:09:20.037: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:20.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:21.081: INFO: rc: 1 May 4 16:09:21.081: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:21.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:22.038: INFO: rc: 1 May 4 16:09:22.038: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:22.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:23.017: INFO: rc: 1 May 4 16:09:23.017: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:23.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:24.035: INFO: rc: 1 May 4 16:09:24.035: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:24.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:25.030: INFO: rc: 1 May 4 16:09:25.030: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:25.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:26.035: INFO: rc: 1 May 4 16:09:26.035: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:26.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:27.126: INFO: rc: 1 May 4 16:09:27.126: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:27.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:28.184: INFO: rc: 1 May 4 16:09:28.184: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:28.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:29.063: INFO: rc: 1 May 4 16:09:29.063: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:29.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:30.064: INFO: rc: 1 May 4 16:09:30.064: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:30.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:31.052: INFO: rc: 1 May 4 16:09:31.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:31.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:32.104: INFO: rc: 1 May 4 16:09:32.104: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:32.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:33.062: INFO: rc: 1 May 4 16:09:33.062: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:33.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:34.152: INFO: rc: 1 May 4 16:09:34.152: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:34.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:35.049: INFO: rc: 1 May 4 16:09:35.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:35.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:36.047: INFO: rc: 1 May 4 16:09:36.047: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:36.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:37.263: INFO: rc: 1 May 4 16:09:37.263: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:37.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:38.040: INFO: rc: 1 May 4 16:09:38.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:38.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:39.062: INFO: rc: 1 May 4 16:09:39.062: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:39.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:40.045: INFO: rc: 1 May 4 16:09:40.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:40.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:41.021: INFO: rc: 1 May 4 16:09:41.021: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:41.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:42.032: INFO: rc: 1 May 4 16:09:42.032: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:42.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:43.030: INFO: rc: 1 May 4 16:09:43.030: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:43.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:44.059: INFO: rc: 1 May 4 16:09:44.059: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:44.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:45.043: INFO: rc: 1 May 4 16:09:45.043: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:45.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:46.052: INFO: rc: 1 May 4 16:09:46.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:46.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:47.024: INFO: rc: 1 May 4 16:09:47.024: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:47.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:48.018: INFO: rc: 1 May 4 16:09:48.018: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:48.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:49.037: INFO: rc: 1 May 4 16:09:49.037: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:49.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:50.032: INFO: rc: 1 May 4 16:09:50.033: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:50.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:51.272: INFO: rc: 1 May 4 16:09:51.272: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:51.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:52.024: INFO: rc: 1 May 4 16:09:52.025: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:52.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:53.107: INFO: rc: 1 May 4 16:09:53.107: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:53.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:54.042: INFO: rc: 1 May 4 16:09:54.042: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:54.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:55.094: INFO: rc: 1 May 4 16:09:55.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:55.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:56.196: INFO: rc: 1 May 4 16:09:56.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:56.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:58.005: INFO: rc: 1 May 4 16:09:58.005: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:09:58.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:09:59.063: INFO: rc: 1 May 4 16:09:59.063: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:09:59.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:00.072: INFO: rc: 1 May 4 16:10:00.072: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:00.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:01.133: INFO: rc: 1 May 4 16:10:01.133: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:01.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:02.213: INFO: rc: 1 May 4 16:10:02.213: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:02.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:03.285: INFO: rc: 1 May 4 16:10:03.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:03.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:04.199: INFO: rc: 1 May 4 16:10:04.199: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:04.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:05.132: INFO: rc: 1 May 4 16:10:05.132: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:05.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:06.299: INFO: rc: 1 May 4 16:10:06.299: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:06.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:07.179: INFO: rc: 1 May 4 16:10:07.179: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:07.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:08.186: INFO: rc: 1 May 4 16:10:08.186: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:08.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:09.042: INFO: rc: 1 May 4 16:10:09.042: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:09.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:10.297: INFO: rc: 1 May 4 16:10:10.297: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:10.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:11.073: INFO: rc: 1 May 4 16:10:11.074: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:11.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:12.643: INFO: rc: 1 May 4 16:10:12.643: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:12.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:13.531: INFO: rc: 1 May 4 16:10:13.531: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:13.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:14.603: INFO: rc: 1 May 4 16:10:14.603: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:14.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:15.050: INFO: rc: 1 May 4 16:10:15.051: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:15.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:16.084: INFO: rc: 1 May 4 16:10:16.084: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:16.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:17.067: INFO: rc: 1 May 4 16:10:17.068: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:17.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:18.043: INFO: rc: 1 May 4 16:10:18.043: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:18.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:19.025: INFO: rc: 1 May 4 16:10:19.025: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:19.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:20.107: INFO: rc: 1 May 4 16:10:20.107: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:20.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:21.380: INFO: rc: 1 May 4 16:10:21.380: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:21.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:22.392: INFO: rc: 1 May 4 16:10:22.392: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:22.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:23.101: INFO: rc: 1 May 4 16:10:23.101: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:23.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:24.040: INFO: rc: 1 May 4 16:10:24.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:24.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:25.038: INFO: rc: 1 May 4 16:10:25.038: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:25.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:26.063: INFO: rc: 1 May 4 16:10:26.063: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:10:26.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:28.128: INFO: rc: 1 May 4 16:10:28.128: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:10:28.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011' May 4 16:10:29.064: INFO: rc: 1 May 4 16:10:29.064: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 31011 nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[... the same probe was retried roughly once per second from 16:10:29 through 16:11:09, every attempt failing identically with "nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused" ...]
May 4 16:11:10.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011'
May 4 16:11:10.822: INFO: rc: 1
May 4 16:11:10.822: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4716 exec execpod-affinityl8j2v -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 31011:
Command stdout:

stderr:
+ nc -zv -t -w 2 10.10.190.207 31011
nc: connect to 10.10.190.207 port 31011 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 4 16:11:10.823: FAIL: Unexpected error:
    <*errors.errorString | 0xc0031c8190>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31011 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31011 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0012c2000, 0x54075e0, 0xc002c81ce0, 0xc001082480, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3511 +0x62e
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3470
k8s.io/kubernetes/test/e2e/network.glob..func24.28()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2508 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002d4d800, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
May 4 16:11:10.824: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-4716, will wait for the garbage collector to delete the pods
May 4 16:11:10.888: INFO: Deleting ReplicationController affinity-nodeport took: 5.32273ms
May 4 16:11:10.989: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.519376ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "services-4716".
STEP: Found 31 events.
May 4 16:11:20.009: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-q859k: { } Scheduled: Successfully assigned services-4716/affinity-nodeport-q859k to node1
May 4 16:11:20.009: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-tmr9l: { } Scheduled: Successfully assigned services-4716/affinity-nodeport-tmr9l to node2
May 4 16:11:20.009: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-vjvq8: { } Scheduled: Successfully assigned services-4716/affinity-nodeport-vjvq8 to node2
May 4 16:11:20.009: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityl8j2v: { } Scheduled: Successfully assigned services-4716/execpod-affinityl8j2v to node2
May 4 16:11:20.009: INFO: At 2021-05-04 16:08:57 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-tmr9l
May 4 16:11:20.009: INFO: At 2021-05-04 16:08:57 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-q859k
May 4 16:11:20.009: INFO: At 2021-05-04 16:08:57 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-vjvq8
May 4 16:11:20.009: INFO: At 2021-05-04 16:08:59 +0000 UTC - event for affinity-nodeport-q859k: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:11:20.009: INFO: At 2021-05-04 16:08:59 +0000 UTC - event for affinity-nodeport-q859k: {multus } AddedInterface: Add eth0 [10.244.4.124/24]
May 4 16:11:20.009: INFO: At 2021-05-04 16:08:59 +0000 UTC - event for affinity-nodeport-vjvq8: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 459.298636ms
May 4 16:11:20.009: INFO: At 2021-05-04 16:08:59 +0000 UTC - event for affinity-nodeport-vjvq8: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:11:20.009: INFO: At 2021-05-04 16:08:59 +0000 UTC - event for affinity-nodeport-vjvq8: {multus } AddedInterface: Add eth0 [10.244.3.161/24]
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:00 +0000 UTC - event for affinity-nodeport-q859k: {kubelet node1} Started: Started container affinity-nodeport
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:00 +0000 UTC - event for affinity-nodeport-q859k: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 453.182413ms
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:00 +0000 UTC - event for affinity-nodeport-q859k: {kubelet node1} Created: Created container affinity-nodeport
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:00 +0000 UTC - event for affinity-nodeport-tmr9l: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:00 +0000 UTC - event for affinity-nodeport-tmr9l: {multus } AddedInterface: Add eth0 [10.244.3.162/24]
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:00 +0000 UTC - event for affinity-nodeport-vjvq8: {kubelet node2} Created: Created container affinity-nodeport
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:00 +0000 UTC - event for affinity-nodeport-vjvq8: {kubelet node2} Started: Started container affinity-nodeport
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:01 +0000 UTC - event for affinity-nodeport-tmr9l: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 1.566181061s
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:02 +0000 UTC - event for affinity-nodeport-tmr9l: {kubelet node2} Created: Created container affinity-nodeport
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:02 +0000 UTC - event for affinity-nodeport-tmr9l: {kubelet node2} Started: Started container affinity-nodeport
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:06 +0000 UTC - event for execpod-affinityl8j2v: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 416.816571ms
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:06 +0000 UTC - event for execpod-affinityl8j2v: {kubelet node2} Created: Created container agnhost-container
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:06 +0000 UTC - event for execpod-affinityl8j2v: {kubelet node2} Started: Started container agnhost-container
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:06 +0000 UTC - event for execpod-affinityl8j2v: {multus } AddedInterface: Add eth0 [10.244.3.165/24]
May 4 16:11:20.009: INFO: At 2021-05-04 16:09:06 +0000 UTC - event for execpod-affinityl8j2v: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:11:20.009: INFO: At 2021-05-04 16:11:10 +0000 UTC - event for affinity-nodeport-q859k: {kubelet node1} Killing: Stopping container affinity-nodeport
May 4 16:11:20.009: INFO: At 2021-05-04 16:11:10 +0000 UTC - event for affinity-nodeport-tmr9l: {kubelet node2} Killing: Stopping container affinity-nodeport
May 4 16:11:20.009: INFO: At 2021-05-04 16:11:10 +0000 UTC - event for affinity-nodeport-vjvq8: {kubelet node2} Killing: Stopping container affinity-nodeport
May 4 16:11:20.009: INFO: At 2021-05-04 16:11:10 +0000 UTC - event for execpod-affinityl8j2v: {kubelet node2} Killing: Stopping container agnhost-container
May 4 16:11:20.011: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
May 4 16:11:20.011: INFO: 
May 4 16:11:20.016: INFO: Logging node info for node master1
May 4 16:11:20.018: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 34592 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock 
nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 
2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:11:17 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:11:20.019: INFO: Logging kubelet events for node master1 May 4 16:11:20.021: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:11:20.030: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.030: INFO: 
Container kube-controller-manager ready: true, restart count 2 May 4 16:11:20.030: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.030: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:11:20.030: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.030: INFO: Container docker-registry ready: true, restart count 0 May 4 16:11:20.030: INFO: Container nginx ready: true, restart count 0 May 4 16:11:20.030: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.030: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:11:20.030: INFO: Container node-exporter ready: true, restart count 0 May 4 16:11:20.030: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.030: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:11:20.030: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.030: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:11:20.030: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:11:20.030: INFO: Init container install-cni ready: true, restart count 0 May 4 16:11:20.030: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:11:20.030: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.030: INFO: Container kube-multus ready: true, restart count 1 May 4 16:11:20.030: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.030: INFO: Container coredns ready: true, restart count 1 May 4 16:11:20.030: INFO: 
node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.030: INFO: Container nfd-controller ready: true, restart count 0 W0504 16:11:20.043078 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:11:20.067: INFO: Latency metrics for node master1 May 4 16:11:20.067: INFO: Logging node info for node master2 May 4 16:11:20.069: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 34565 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:16 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:11:16 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:16 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:11:16 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:11:20.069: INFO: Logging kubelet events for node master2 May 4 16:11:20.071: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:11:20.080: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.080: INFO: Container kube-multus ready: true, restart count 1 May 4 16:11:20.080: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.080: INFO: Container autoscaler ready: true, restart count 1 May 4 16:11:20.080: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.080: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:11:20.080: INFO: Container node-exporter ready: true, restart count 0 May 4 16:11:20.080: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.080: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:11:20.080: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.080: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:11:20.080: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.080: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:11:20.080: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 
container statuses recorded) May 4 16:11:20.080: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:11:20.080: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:11:20.080: INFO: Init container install-cni ready: true, restart count 0 May 4 16:11:20.080: INFO: Container kube-flannel ready: true, restart count 1 W0504 16:11:20.094025 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:11:20.117: INFO: Latency metrics for node master2 May 4 16:11:20.117: INFO: Logging node info for node master3 May 4 16:11:20.120: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 34562 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:16 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:16 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:16 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:11:16 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:11:20.120: INFO: Logging kubelet events for node master3 May 4 16:11:20.122: INFO: Logging pods the kubelet thinks are on node master3 May 4 16:11:20.130: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.130: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:11:20.130: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.130: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:11:20.130: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.130: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:11:20.130: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.130: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:11:20.130: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:11:20.130: INFO: Init container install-cni ready: true, restart count 0 May 4 16:11:20.130: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:11:20.130: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.130: INFO: Container kube-multus ready: true, restart count 1 May 4 16:11:20.130: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container 
statuses recorded) May 4 16:11:20.130: INFO: Container coredns ready: true, restart count 1 May 4 16:11:20.130: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.130: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:11:20.130: INFO: Container node-exporter ready: true, restart count 0 W0504 16:11:20.143356 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:11:20.167: INFO: Latency metrics for node master3 May 4 16:11:20.167: INFO: Logging node info for node node1 May 4 16:11:20.170: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 34607 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:18 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:18 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:18 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:11:18 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:11:20.170: INFO: Logging kubelet events for node node1 May 4 16:11:20.173: INFO: Logging pods the kubelet thinks are on node node1 May 4 16:11:20.187: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.187: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:11:20.188: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.188: INFO: Container nodereport ready: true, restart count 0 May 4 16:11:20.188: INFO: Container reconcile ready: true, restart count 0 May 4 16:11:20.188: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:11:20.188: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:11:20.188: INFO: Container grafana ready: true, restart count 0 May 4 16:11:20.188: INFO: Container prometheus ready: true, restart count 1 May 4 16:11:20.188: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:11:20.188: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:11:20.188: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:11:20.188: INFO: Init container install-cni ready: true, restart count 2 May 4 16:11:20.188: INFO: Container kube-flannel ready: true, restart count 2 
May 4 16:11:20.188: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:11:20.188: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.188: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:11:20.188: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:11:20.188: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.188: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:11:20.188: INFO: Container node-exporter ready: true, restart count 0 May 4 16:11:20.188: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:11:20.188: INFO: Container collectd ready: true, restart count 0 May 4 16:11:20.188: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:11:20.188: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:11:20.188: INFO: pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40 started at 2021-05-04 16:09:08 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container env-test ready: false, restart count 0 May 4 16:11:20.188: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:11:20.188: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:11:20.188: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container kube-proxy ready: true, restart count 1 May 4 
16:11:20.188: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container liveness-http ready: false, restart count 15 May 4 16:11:20.188: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container srv ready: true, restart count 0 May 4 16:11:20.188: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:10:44 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container env3cont ready: false, restart count 0 May 4 16:11:20.188: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:11:20.188: INFO: Container discover ready: false, restart count 0 May 4 16:11:20.188: INFO: Container init ready: false, restart count 0 May 4 16:11:20.188: INFO: Container install ready: false, restart count 0 May 4 16:11:20.188: INFO: downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b started at 2021-05-04 16:09:46 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container dapi-container ready: false, restart count 0 May 4 16:11:20.188: INFO: simpletest.deployment-7f7555f8bc-dnbdp started at 2021-05-04 16:10:20 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container nginx ready: false, restart count 0 May 4 16:11:20.188: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container kube-multus ready: true, restart count 1 May 4 16:11:20.188: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container webserver ready: false, restart count 0 May 4 16:11:20.188: INFO: condition-test-d9cmm started at 2021-05-04 16:11:15 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.188: INFO: Container httpd ready: 
false, restart count 0 W0504 16:11:20.200717 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:11:20.250: INFO: Latency metrics for node node1 May 4 16:11:20.250: INFO: Logging node info for node node2 May 4 16:11:20.253: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 34608 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":
{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:18 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:18 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:11:18 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:11:18 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 
gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:11:20.254: INFO: Logging kubelet events for node node2 May 4 16:11:20.256: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:11:20.273: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container kube-multus ready: true, restart count 1 May 4 16:11:20.273: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:11:20.273: INFO: Container discover ready: false, restart count 0 May 4 16:11:20.273: INFO: Container init ready: false, restart count 0 May 4 16:11:20.273: INFO: Container install ready: false, restart count 0 May 4 16:11:20.273: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:11:20.273: INFO: Container collectd ready: true, restart count 0 May 4 16:11:20.273: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:11:20.273: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:11:20.273: INFO: var-expansion-11a0463e-02e0-4f42-95b7-041fc5123e73 started at 2021-05-04 16:09:35 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container dapi-container ready: false, restart count 0 May 4 16:11:20.273: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 May 4 16:11:20.273: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses 
recorded) May 4 16:11:20.273: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:11:20.273: INFO: pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524 started at 2021-05-04 16:11:17 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container env-test ready: false, restart count 0 May 4 16:11:20.273: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:11:20.273: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:11:20.273: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.273: INFO: Container nodereport ready: true, restart count 0 May 4 16:11:20.273: INFO: Container reconcile ready: true, restart count 0 May 4 16:11:20.273: INFO: condition-test-22v4c started at 2021-05-04 16:11:15 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container httpd ready: false, restart count 0 May 4 16:11:20.273: INFO: downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 started at 2021-05-04 16:10:01 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container dapi-container ready: false, restart count 0 May 4 16:11:20.273: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:11:20.273: INFO: Init container install-cni ready: true, restart count 2 May 4 16:11:20.273: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:11:20.273: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:11:20.273: INFO: 
simpletest.deployment-7f7555f8bc-sbx99 started at 2021-05-04 16:10:20 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container nginx ready: false, restart count 0 May 4 16:11:20.273: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container webserver ready: true, restart count 0 May 4 16:11:20.273: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:11:20.273: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.273: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:11:20.273: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.273: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:11:20.273: INFO: Container node-exporter ready: true, restart count 0 May 4 16:11:20.274: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:11:20.274: INFO: Container tas-controller ready: true, restart count 0 May 4 16:11:20.274: INFO: Container tas-extender ready: true, restart count 0 May 4 16:11:20.274: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:11:20.274: INFO: Container liveness-exec ready: false, restart count 6 W0504 16:11:20.287305 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:11:20.315: INFO: Latency metrics for node node2 May 4 16:11:20.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4716" for this suite. 
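The Services suite above fails its NodePort session-affinity spec because the endpoint 10.10.190.207:31011 never became reachable within the 2m timeout, so the affinity check itself never ran. What that check asserts can be sketched offline: with `ClientIP` affinity, repeated requests from one client should converge on a single backend pod. This is a minimal sketch patterned loosely on the e2e framework's affinity-confirmation behavior; the `confirm=15` trailing-window count and the hostname lists are assumptions, not the framework's actual code.

```python
def affinity_confirmed(responses, confirm=15):
    """Sketch of a session-affinity check: affinity is treated as
    confirmed once the trailing `confirm` responses were all served
    by the same backend (identified here by echoed pod hostname).
    The window size is an assumption, not the e2e framework's value."""
    if len(responses) < confirm:
        return False
    return len(set(responses[-confirm:])) == 1

# Hypothetical hostnames echoed by repeated hits on the NodePort:
print(affinity_confirmed(["echo-abc12"] * 15))                 # True
print(affinity_confirmed(["echo-abc12", "echo-def34"] * 10))   # False
```

In the failed run the probe never got this far: the TCP connection to the node endpoint timed out, which points at the service/proxy path rather than the affinity logic.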
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • Failure [142.445 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:11:10.823: Unexpected error: <*errors.errorString | 0xc0031c8190>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31011 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31011 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3511 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":219,"failed":2,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:20.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the 
deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0504 16:10:21.738784 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:11:23.756: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:23.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5065" for this suite. • [SLOW TEST:63.089 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":14,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:23.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:23.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1397" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":15,"skipped":298,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:23.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 4 16:11:23.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3133 cluster-info' May 4 16:11:24.118: INFO: stderr: "" May 4 16:11:24.118: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl 
cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:24.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3133" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":-1,"completed":16,"skipped":299,"failed":0} SSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:35.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:11:35.260: INFO: Deleting pod "var-expansion-11a0463e-02e0-4f42-95b7-041fc5123e73" in namespace "var-expansion-9645" May 4 16:11:35.265: INFO: Wait up to 5m0s for pod "var-expansion-11a0463e-02e0-4f42-95b7-041fc5123e73" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:37.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9645" for this suite. 
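The Variable Expansion spec above exercises kubelet's `$(VAR_NAME)` substitution in volume subPaths: only the `$(...)` form is expanded, `$$` escapes a literal `$`, and backticks get no special treatment, so a backtick subPath is passed through verbatim and then rejected, which is exactly the failure the spec expects. A simplified sketch of that expansion rule (not the kubelet's actual implementation):

```python
def expand(s, env):
    """Simplified $(VAR) expansion in the Kubernetes style:
    '$$' escapes to a literal '$', known $(VAR) references are
    substituted, unknown ones are left untouched, and anything
    else (including backticks) passes through unchanged."""
    out, i = [], 0
    while i < len(s):
        if s.startswith("$$", i):
            out.append("$"); i += 2
        elif s.startswith("$(", i):
            j = s.find(")", i)
            if j == -1:                      # unterminated reference
                out.append(s[i:]); break
            name = s[i + 2:j]
            out.append(env.get(name, s[i:j + 1])); i = j + 1
        else:
            out.append(s[i]); i += 1
    return "".join(out)

print(expand("logs-$(POD_NAME)", {"POD_NAME": "web-0"}))  # logs-web-0
print(expand("`whoami`", {}))  # `whoami` -- untouched, later fails path validation
```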
• [SLOW TEST:122.059 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:24.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 4 16:11:24.159: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
May 4 16:11:24.528: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 4 16:11:26.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:11:28.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:11:30.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:11:32.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:11:34.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741484, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:11:38.572: INFO: Waited 2.007356147s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:39.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8403" for this suite. 
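The Aggregator spec above polls the sample-apiserver Deployment's status until it is usable: in the log, the `Available` condition stays `False` with reason `MinimumReplicasUnavailable` while `ReadyReplicas`/`AvailableReplicas` are 0, then flips once the pod comes up. The readiness predicate being polled can be sketched as follows, assuming status dicts shaped like the logged `v1.DeploymentStatus` (the dict layout here is an illustration, not the client-go types):

```python
def deployment_ready(status, desired):
    """Sketch of a Deployment readiness check: all desired replicas
    must be updated and available, and the Available condition must
    report True. Status is a plain dict mirroring v1.DeploymentStatus."""
    conds = {c["type"]: c["status"] for c in status.get("conditions", [])}
    return (status.get("updatedReplicas", 0) == desired
            and status.get("availableReplicas", 0) == desired
            and conds.get("Available") == "True")

# Status mirroring the log while the sample-apiserver pod is starting:
pending = {"updatedReplicas": 1, "availableReplicas": 0,
           "conditions": [{"type": "Available", "status": "False"},
                          {"type": "Progressing", "status": "True"}]}
print(deployment_ready(pending, 1))  # False
```

The e2e framework wraps this predicate in a wait loop with a poll interval, which is why the same `UnavailableReplicas:1` status is logged every two seconds until the condition flips.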
• [SLOW TEST:15.327 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":17,"skipped":304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:39.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 4 16:11:39.547: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-185 proxy --unix-socket=/tmp/kubectl-proxy-unix688425334/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:39.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-185" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":18,"skipped":331,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:39.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 16:11:40.104: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 16:11:42.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741500, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741500, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741500, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741500, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:11:45.124: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:11:45.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4064-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:11:51.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7466" for this suite. STEP: Destroying namespace "webhook-7466-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.562 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":19,"skipped":336,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:08.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1094/configmap-test-2b60f89b-09a5-4f40-a396-b7fd56956c3e STEP: Creating a pod to test consume configMaps May 4 16:09:08.927: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40" in namespace "configmap-1094" to be "Succeeded or Failed" May 4 16:09:08.929: INFO: Pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40": Phase="Pending", Reason="", readiness=false. Elapsed: 1.891187ms May 4 16:09:10.932: INFO: Pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00447104s May 4 16:09:12.935: INFO: Pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007617687s [... identical status polls repeated every ~2s from 16:09:14 to 16:14:01; pod remained Pending throughout ...] May 4 16:14:03.423: INFO: Pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m54.495545057s May 4 16:14:05.427: INFO: Pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.499632746s May 4 16:14:07.430: INFO: Pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.502800408s May 4 16:14:09.445: INFO: Failed to get logs from node "node1" pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40" container "env-test": the server rejected our request for an unknown reason (get pods pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40) STEP: delete the pod May 4 16:14:09.450: INFO: Waiting for pod pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40 to disappear May 4 16:14:09.452: INFO: Pod pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40 still exists May 4 16:14:11.453: INFO: Waiting for pod pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40 to disappear May 4 16:14:11.455: INFO: Pod pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40 no longer exists May 4 16:14:11.456: FAIL: Unexpected error: <*errors.errorString | 0xc003c5c610>: { s: "expected pod \"pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40\" to be \"Succeeded or Failed\"", } expected pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40" success: Gave up after waiting 5m0s for pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40" to be "Succeeded or Failed" occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00059e420, 0x4c18c6e, 0x12, 0xc000781400, 0x0, 0xc001563158, 0x1, 0x1, 0x4de7488) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:525 k8s.io/kubernetes/test/e2e/common.glob..func1.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:80 +0x8a5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000179e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc000179e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000179e00, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "configmap-1094". STEP: Found 10 events. May 4 16:14:11.461: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: { } Scheduled: Successfully assigned configmap-1094/pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40 to node1 May 4 16:14:11.461: INFO: At 2021-05-04 16:09:11 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: {multus } AddedInterface: Add eth0 [10.244.4.131/24] May 4 16:14:11.461: INFO: At 2021-05-04 16:09:11 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:14:11.461: INFO: At 2021-05-04 16:09:13 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:14:11.461: INFO: At 2021-05-04 16:09:13 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: {kubelet node1} Failed: Error: ErrImagePull May 4 16:14:11.461: INFO: At 2021-05-04 16:09:13 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 4 16:14:11.461: INFO: At 2021-05-04 16:09:15 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: {multus } AddedInterface: Add eth0 [10.244.4.132/24] May 4 16:14:11.461: INFO: At 2021-05-04 16:09:15 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:14:11.461: INFO: At 2021-05-04 16:09:15 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: {kubelet node1} Failed: Error: ImagePullBackOff May 4 16:14:11.461: INFO: At 2021-05-04 16:09:30 +0000 UTC - event for pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:14:11.463: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:14:11.463: INFO: May 4 16:14:11.467: INFO: Logging node info for node master1 May 4 16:14:11.470: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 35798 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:11.470: INFO: Logging kubelet events for node master1 May 4 16:14:11.472: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:14:11.489: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:14:11.489: INFO: Init container 
install-cni ready: true, restart count 0 May 4 16:14:11.489: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:14:11.489: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.489: INFO: Container kube-multus ready: true, restart count 1 May 4 16:14:11.489: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.489: INFO: Container coredns ready: true, restart count 1 May 4 16:14:11.490: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.490: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:14:11.490: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.490: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:14:11.490: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.490: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:14:11.490: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:14:11.490: INFO: Container docker-registry ready: true, restart count 0 May 4 16:14:11.490: INFO: Container nginx ready: true, restart count 0 May 4 16:14:11.490: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:14:11.490: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:11.490: INFO: Container node-exporter ready: true, restart count 0 May 4 16:14:11.490: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.490: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:14:11.490: INFO: 
kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.490: INFO: Container kube-apiserver ready: true, restart count 0 W0504 16:14:11.503015 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:14:11.533: INFO: Latency metrics for node master1 May 4 16:14:11.533: INFO: Logging node info for node master2 May 4 16:14:11.535: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 35797 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:11.536: INFO: Logging kubelet events for node master2 May 4 16:14:11.538: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:14:11.551: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.551: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:14:11.551: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.551: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:14:11.551: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.551: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:14:11.551: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:14:11.551: INFO: Init container install-cni ready: true, restart count 0 May 4 16:14:11.551: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:14:11.551: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.551: INFO: Container kube-multus ready: true, restart count 1 May 4 16:14:11.551: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.551: INFO: Container autoscaler ready: true, restart count 1 May 4 16:14:11.552: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container 
statuses recorded) May 4 16:14:11.552: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:11.552: INFO: Container node-exporter ready: true, restart count 0 May 4 16:14:11.552: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.552: INFO: Container kube-apiserver ready: true, restart count 0 W0504 16:14:11.562833 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:14:11.588: INFO: Latency metrics for node master2 May 4 16:14:11.588: INFO: Logging node info for node master3 May 4 16:14:11.591: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 35796 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:07 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:07 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:11.591: INFO: Logging kubelet events for node master3 May 4 16:14:11.594: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:14:11.610: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.610: INFO: Container coredns ready: true, restart count 1 May 4 16:14:11.610: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:14:11.610: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:11.610: INFO: Container node-exporter ready: true, restart count 0 May 4 16:14:11.610: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.610: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:14:11.610: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.610: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:14:11.610: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.610: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:14:11.610: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.610: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:14:11.610: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses 
recorded) May 4 16:14:11.610: INFO: Init container install-cni ready: true, restart count 0 May 4 16:14:11.610: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:14:11.610: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.610: INFO: Container kube-multus ready: true, restart count 1 W0504 16:14:11.623932 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:14:11.647: INFO: Latency metrics for node master3 May 4 16:14:11.647: INFO: Logging node info for node node1 May 4 16:14:11.650: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 35806 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:09 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:09 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:09 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:09 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:11.650: INFO: Logging kubelet events for node node1 May 4 16:14:11.652: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:14:11.667: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:14:11.667: INFO: Container collectd ready: true, restart count 0 May 4 16:14:11.667: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:14:11.667: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:14:11.667: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:14:11.667: INFO: busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda started at 2021-05-04 16:11:20 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda ready: false, restart count 0 May 4 16:14:11.667: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container srv ready: true, restart count 0 May 4 16:14:11.667: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:10:44 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container env3cont ready: false, restart count 0 May 4 
16:14:11.667: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:14:11.667: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:14:11.667: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container liveness-http ready: false, restart count 15 May 4 16:14:11.667: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:14:11.667: INFO: Container discover ready: false, restart count 0 May 4 16:14:11.667: INFO: Container init ready: false, restart count 0 May 4 16:14:11.667: INFO: Container install ready: false, restart count 0 May 4 16:14:11.667: INFO: downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b started at 2021-05-04 16:09:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container dapi-container ready: false, restart count 0 May 4 16:14:11.667: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container kube-multus ready: true, restart count 1 May 4 16:14:11.667: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container webserver ready: false, restart count 0 May 4 16:14:11.667: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:14:11.667: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:14:11.667: INFO: Container nodereport ready: true, restart count 0 May 4 16:14:11.667: 
INFO: Container reconcile ready: true, restart count 0 May 4 16:14:11.667: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:14:11.667: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:14:11.667: INFO: Container grafana ready: true, restart count 0 May 4 16:14:11.667: INFO: Container prometheus ready: true, restart count 1 May 4 16:14:11.667: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:14:11.667: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:14:11.667: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:14:11.667: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:11.667: INFO: Container node-exporter ready: true, restart count 0 May 4 16:14:11.667: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:14:11.667: INFO: Init container install-cni ready: true, restart count 2 May 4 16:14:11.667: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:14:11.667: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.667: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:14:11.667: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:14:11.667: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:11.667: INFO: Container prometheus-operator ready: true, restart count 0 W0504 16:14:11.678935 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:14:11.727: INFO: Latency metrics for node node1 May 4 16:14:11.727: INFO: Logging node info for node node2 May 4 16:14:11.729: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 35805 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:09 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:09 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:09 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:09 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:11.730: INFO: Logging kubelet events for node node2 May 4 16:14:11.732: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:14:11.751: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container kube-multus ready: true, restart count 1 May 4 16:14:11.751: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:14:11.751: INFO: Container discover ready: false, restart count 0 May 4 16:14:11.751: INFO: Container init ready: false, restart count 0 May 4 16:14:11.751: INFO: Container install ready: false, restart count 0 May 4 16:14:11.751: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:14:11.751: INFO: Container collectd ready: true, restart count 0 May 4 16:14:11.751: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:14:11.751: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:14:11.751: INFO: 
var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c started at 2021-05-04 16:11:37 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container dapi-container ready: false, restart count 0 May 4 16:14:11.751: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 May 4 16:14:11.751: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:14:11.751: INFO: pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524 started at 2021-05-04 16:11:17 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container env-test ready: false, restart count 0 May 4 16:14:11.751: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:14:11.751: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:14:11.751: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:14:11.751: INFO: Container nodereport ready: true, restart count 0 May 4 16:14:11.751: INFO: Container reconcile ready: true, restart count 0 May 4 16:14:11.751: INFO: downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 started at 2021-05-04 16:10:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container dapi-container ready: false, restart count 0 May 4 16:14:11.751: INFO: test-rolling-update-controller-9v9w8 started at 2021-05-04 16:11:51 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container httpd ready: false, restart count 0 May 4 
16:14:11.751: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:14:11.751: INFO: Init container install-cni ready: true, restart count 2 May 4 16:14:11.751: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:14:11.751: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:14:11.751: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container webserver ready: true, restart count 0 May 4 16:14:11.751: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:14:11.751: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:14:11.751: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:14:11.751: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:11.751: INFO: Container node-exporter ready: true, restart count 0 May 4 16:14:11.751: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:14:11.751: INFO: Container tas-controller ready: true, restart count 0 May 4 16:14:11.751: INFO: Container tas-extender ready: true, restart count 0 May 4 16:14:11.751: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:14:11.751: INFO: Container liveness-exec ready: false, restart count 6 W0504 16:14:11.764674 34 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:14:11.798: INFO: Latency metrics for node node2 May 4 16:14:11.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1094" for this suite. • Failure [302.914 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:14:11.456: Unexpected error: <*errors.errorString | 0xc003c5c610>: { s: "expected pod \"pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40\" to be \"Succeeded or Failed\"", } expected pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40" success: Gave up after waiting 5m0s for pod "pod-configmaps-7a7bc7e1-233e-4d45-8c84-a59566b64b40" to be "Succeeded or Failed" occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 ------------------------------ {"msg":"FAILED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":34,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:09:46.326: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 4 16:09:46.367: INFO: Waiting up to 5m0s for pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b" in namespace "downward-api-504" to be "Succeeded or Failed" May 4 16:09:46.371: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18135ms May 4 16:09:48.374: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007584102s May 4 16:09:50.379: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011892532s May 4 16:09:52.381: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014699145s May 4 16:09:54.384: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017395662s May 4 16:09:56.387: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020723505s May 4 16:09:58.392: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025699572s May 4 16:10:00.395: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.028467411s May 4 16:10:02.399: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m18.381741654s May 4 16:13:06.753: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.386644274s May 4 16:13:08.758: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.391561137s May 4 16:13:10.763: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.39597225s May 4 16:13:12.766: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.398830787s May 4 16:13:14.769: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.401909156s May 4 16:13:16.772: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.405354377s May 4 16:13:18.776: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.409265266s May 4 16:13:20.781: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.413789242s May 4 16:13:22.784: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.416983625s May 4 16:13:24.787: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.420267829s May 4 16:13:26.790: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.423646808s May 4 16:13:28.795: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.428717693s May 4 16:13:30.801: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m44.434384384s May 4 16:13:32.805: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.437814555s May 4 16:13:34.809: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.442352442s May 4 16:13:36.812: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.445656425s May 4 16:13:38.816: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.449032398s May 4 16:13:40.820: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.453045404s May 4 16:13:42.822: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.455537086s May 4 16:13:44.826: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.458987118s May 4 16:13:46.830: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.462856078s May 4 16:13:48.833: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.466603796s May 4 16:13:50.839: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.471857663s May 4 16:13:52.842: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.475135055s May 4 16:13:54.846: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.479045214s May 4 16:13:56.851: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m10.483809979s May 4 16:13:58.855: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.488693435s May 4 16:14:00.859: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.492620211s May 4 16:14:02.862: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.495425124s May 4 16:14:04.866: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.499629966s May 4 16:14:06.870: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.503066378s May 4 16:14:08.874: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.507647063s May 4 16:14:10.877: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.510585498s May 4 16:14:12.880: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.513048565s May 4 16:14:14.884: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.517294026s May 4 16:14:16.889: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.522697421s May 4 16:14:18.892: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.525551476s May 4 16:14:20.895: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.528386713s May 4 16:14:22.898: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m36.531418637s May 4 16:14:24.902: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.535617294s May 4 16:14:26.907: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.540502949s May 4 16:14:28.912: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.545593954s May 4 16:14:30.916: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.548827566s May 4 16:14:32.918: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.551459012s May 4 16:14:34.923: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.556140557s May 4 16:14:36.926: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.559661924s May 4 16:14:38.932: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.564967035s May 4 16:14:40.935: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.568078576s May 4 16:14:42.937: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.57065482s May 4 16:14:44.940: INFO: Pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m58.573678897s May 4 16:14:46.951: INFO: Failed to get logs from node "node1" pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b" container "dapi-container": the server rejected our request for an unknown reason (get pods downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b) STEP: delete the pod May 4 16:14:46.957: INFO: Waiting for pod downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b to disappear May 4 16:14:46.959: INFO: Pod downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b still exists May 4 16:14:48.959: INFO: Waiting for pod downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b to disappear May 4 16:14:48.962: INFO: Pod downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b no longer exists May 4 16:14:48.963: FAIL: Unexpected error: <*errors.errorString | 0xc0042bea70>: { s: "expected pod \"downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b\" success: Gave up after waiting 5m0s for pod \"downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b\" to be \"Succeeded or Failed\"", } expected pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b" success: Gave up after waiting 5m0s for pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b" to be "Succeeded or Failed" occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00140ab00, 0x4c29f00, 0x15, 0xc002af4000, 0x0, 0xc0018371c8, 0x1, 0x1, 0x4de7490) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutputRegexp(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:532 k8s.io/kubernetes/test/e2e/common.testDownwardAPIUsingPod(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:425 k8s.io/kubernetes/test/e2e/common.testDownwardAPI(0xc00140ab00, 0xc004e3a500, 0x31, 0xc005490000, 0x1, 0x1, 0xc0018371c8, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:391 +0x75c k8s.io/kubernetes/test/e2e/common.glob..func5.6() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:283 +0x199 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001568300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001568300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001568300, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "downward-api-504". STEP: Found 7 events. 
May 4 16:14:48.967: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b: { } Scheduled: Successfully assigned downward-api-504/downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b to node1 May 4 16:14:48.967: INFO: At 2021-05-04 16:09:47 +0000 UTC - event for downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b: {multus } AddedInterface: Add eth0 [10.244.4.137/24] May 4 16:14:48.967: INFO: At 2021-05-04 16:09:47 +0000 UTC - event for downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:14:48.967: INFO: At 2021-05-04 16:09:48 +0000 UTC - event for downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:14:48.967: INFO: At 2021-05-04 16:09:48 +0000 UTC - event for downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b: {kubelet node1} Failed: Error: ErrImagePull May 4 16:14:48.967: INFO: At 2021-05-04 16:09:49 +0000 UTC - event for downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:14:48.967: INFO: At 2021-05-04 16:09:49 +0000 UTC - event for downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b: {kubelet node1} Failed: Error: ImagePullBackOff May 4 16:14:48.969: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:14:48.969: INFO: May 4 16:14:48.973: INFO: Logging node info for node master1 May 4 16:14:48.976: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 35990 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 
kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f
:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:48.977: INFO: Logging kubelet events for node master1 May 4 16:14:48.979: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:14:49.002: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.002: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:14:49.002: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.002: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:14:49.002: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.002: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:14:49.002: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.002: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:14:49.002: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.002: INFO: Container docker-registry ready: true, restart count 0 May 4 16:14:49.002: INFO: Container nginx ready: true, restart count 0 May 4 16:14:49.002: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.002: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:49.002: INFO: Container node-exporter ready: true, restart count 0 May 4 16:14:49.002: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:14:49.002: INFO: Init container install-cni ready: true, restart count 0 May 4 16:14:49.002: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:14:49.002: INFO: kube-multus-ds-amd64-jflvf started 
at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.002: INFO: Container kube-multus ready: true, restart count 1 May 4 16:14:49.002: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.002: INFO: Container coredns ready: true, restart count 1 May 4 16:14:49.002: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.002: INFO: Container nfd-controller ready: true, restart count 0 W0504 16:14:49.015483 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:14:49.039: INFO: Latency metrics for node master1 May 4 16:14:49.039: INFO: Logging node info for node master2 May 4 16:14:49.042: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 35989 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:49.042: INFO: Logging kubelet events for node master2 May 4 16:14:49.044: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:14:49.053: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.053: INFO: Container kube-multus ready: true, restart count 1 May 4 16:14:49.053: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.053: INFO: Container autoscaler ready: true, restart count 1 May 4 16:14:49.053: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.053: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:49.053: INFO: Container node-exporter ready: true, restart count 0 May 4 16:14:49.053: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.053: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:14:49.053: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.053: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:14:49.053: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.053: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:14:49.053: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 
container statuses recorded) May 4 16:14:49.053: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:14:49.053: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:14:49.053: INFO: Init container install-cni ready: true, restart count 0 May 4 16:14:49.053: INFO: Container kube-flannel ready: true, restart count 1 W0504 16:14:49.066506 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:14:49.089: INFO: Latency metrics for node master2 May 4 16:14:49.089: INFO: Logging node info for node master3 May 4 16:14:49.091: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 35988 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:47 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:47 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:49.091: INFO: Logging kubelet events for node master3 May 4 16:14:49.094: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:14:49.100: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.100: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:14:49.100: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.100: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:14:49.100: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.100: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:14:49.100: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.100: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:14:49.100: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:14:49.100: INFO: Init container install-cni ready: true, restart count 0 May 4 16:14:49.100: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:14:49.100: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.100: INFO: Container kube-multus ready: true, restart count 1 May 4 16:14:49.100: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container 
statuses recorded) May 4 16:14:49.101: INFO: Container coredns ready: true, restart count 1 May 4 16:14:49.101: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.101: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:49.101: INFO: Container node-exporter ready: true, restart count 0 W0504 16:14:49.112835 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:14:49.135: INFO: Latency metrics for node master3 May 4 16:14:49.135: INFO: Logging node info for node node1 May 4 16:14:49.139: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 35959 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:39 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:39 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:39 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:39 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:49.139: INFO: Logging kubelet events for node node1 May 4 16:14:49.142: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:14:49.157: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.157: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:49.157: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:14:49.157: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.157: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:49.157: INFO: Container node-exporter ready: true, restart count 0 May 4 16:14:49.157: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:14:49.157: INFO: Init container install-cni ready: true, restart count 2 May 4 16:14:49.157: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:14:49.157: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:14:49.157: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:14:49.157: INFO: Container collectd ready: true, restart count 0 May 4 16:14:49.157: INFO: Container 
collectd-exporter ready: true, restart count 0 May 4 16:14:49.157: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:14:49.157: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:14:49.157: INFO: busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda started at 2021-05-04 16:11:20 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda ready: false, restart count 0 May 4 16:14:49.157: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container liveness-http ready: true, restart count 16 May 4 16:14:49.157: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container srv ready: true, restart count 0 May 4 16:14:49.157: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:10:44 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container env3cont ready: false, restart count 0 May 4 16:14:49.157: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:14:49.157: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:14:49.157: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:14:49.157: INFO: Container discover ready: false, restart count 0 May 4 16:14:49.157: INFO: Container init ready: false, restart count 0 May 4 16:14:49.157: INFO: Container install ready: 
false, restart count 0 May 4 16:14:49.157: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container kube-multus ready: true, restart count 1 May 4 16:14:49.157: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container webserver ready: false, restart count 0 May 4 16:14:49.157: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:14:49.157: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:14:49.157: INFO: Container grafana ready: true, restart count 0 May 4 16:14:49.157: INFO: Container prometheus ready: true, restart count 1 May 4 16:14:49.157: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:14:49.157: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:14:49.157: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.157: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:14:49.157: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.157: INFO: Container nodereport ready: true, restart count 0 May 4 16:14:49.157: INFO: Container reconcile ready: true, restart count 0 W0504 16:14:49.170641 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:14:49.306: INFO: Latency metrics for node node1 May 4 16:14:49.306: INFO: Logging node info for node node2 May 4 16:14:49.310: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 35958 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:39 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:39 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:39 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:39 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:14:49.311: INFO: Logging kubelet events for node node2 May 4 16:14:49.313: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:14:49.327: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:14:49.327: INFO: pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524 started at 2021-05-04 16:11:17 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container env-test ready: false, restart count 0 May 4 16:14:49.327: INFO: pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 started at 2021-05-04 16:14:12 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container env-test ready: false, restart count 0 May 4 16:14:49.327: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:14:49.327: INFO: 
sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:14:49.327: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.327: INFO: Container nodereport ready: true, restart count 0 May 4 16:14:49.327: INFO: Container reconcile ready: true, restart count 0 May 4 16:14:49.327: INFO: downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 started at 2021-05-04 16:10:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container dapi-container ready: false, restart count 0 May 4 16:14:49.327: INFO: test-rolling-update-controller-9v9w8 started at 2021-05-04 16:11:51 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container httpd ready: false, restart count 0 May 4 16:14:49.327: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container webserver ready: true, restart count 0 May 4 16:14:49.327: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:14:49.327: INFO: Init container install-cni ready: true, restart count 2 May 4 16:14:49.327: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:14:49.327: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:14:49.327: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.327: INFO: Container tas-controller ready: true, restart count 0 May 4 16:14:49.327: INFO: Container tas-extender ready: true, restart count 0 May 4 16:14:49.327: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) 
May 4 16:14:49.327: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:14:49.327: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:14:49.327: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:14:49.327: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:14:49.327: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:14:49.327: INFO: Container node-exporter ready: true, restart count 0 May 4 16:14:49.327: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container kube-multus ready: true, restart count 1 May 4 16:14:49.327: INFO: var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c started at 2021-05-04 16:11:37 +0000 UTC (0+1 container statuses recorded) May 4 16:14:49.327: INFO: Container dapi-container ready: false, restart count 0 May 4 16:14:49.327: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:14:49.327: INFO: Container discover ready: false, restart count 0 May 4 16:14:49.327: INFO: Container init ready: false, restart count 0 May 4 16:14:49.327: INFO: Container install ready: false, restart count 0 May 4 16:14:49.327: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:14:49.327: INFO: Container collectd ready: true, restart count 0 May 4 16:14:49.327: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:14:49.327: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:14:49.327: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 
container statuses recorded) May 4 16:14:49.327: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 W0504 16:14:49.341244 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:14:49.403: INFO: Latency metrics for node node2 May 4 16:14:49.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-504" for this suite. • Failure [303.087 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:14:48.963: Unexpected error: <*errors.errorString | 0xc0042bea70>: { s: "expected pod \"downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b\" success: Gave up after waiting 5m0s for pod \"downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b\" to be \"Succeeded or Failed\"", } expected pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b" success: Gave up after waiting 5m0s for pod "downward-api-966b7e3b-1f50-4043-9eef-d10f60813c3b" to be "Succeeded or Failed" occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 ------------------------------ {"msg":"FAILED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":403,"failed":1,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]"]} S ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":307,"failed":0} [BeforeEach] [sig-node] Downward API 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:10:01.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 4 16:10:01.577: INFO: Waiting up to 5m0s for pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30" in namespace "downward-api-6154" to be "Succeeded or Failed" May 4 16:10:01.579: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140361ms May 4 16:10:03.582: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005015564s May 4 16:10:05.586: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008640276s May 4 16:10:07.589: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011538465s May 4 16:10:09.592: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014290662s May 4 16:10:11.594: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 10.017183573s May 4 16:10:13.598: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 12.020288618s May 4 16:10:15.601: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.023594514s May 4 16:10:17.605: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 16.027771025s May 4 16:10:19.608: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 18.031116334s May 4 16:10:21.612: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 20.03514912s May 4 16:10:23.616: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 22.038529823s May 4 16:10:25.619: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 24.041442003s May 4 16:10:27.622: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 26.045185685s May 4 16:10:29.626: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 28.048293886s May 4 16:10:31.629: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 30.051446906s May 4 16:10:33.632: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 32.05517162s May 4 16:10:35.636: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 34.058824853s May 4 16:10:37.640: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 36.062582066s May 4 16:10:39.643: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 38.066174604s May 4 16:10:41.647: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 40.069427526s May 4 16:10:43.650: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 42.073200989s May 4 16:10:45.654: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 44.076327934s May 4 16:10:47.657: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 46.079513607s May 4 16:10:49.661: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 48.083592297s May 4 16:10:51.664: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 50.087078835s May 4 16:10:53.668: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 52.090706716s May 4 16:10:55.672: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 54.094401509s May 4 16:10:57.675: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 56.097371154s May 4 16:10:59.678: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 58.100329105s May 4 16:11:01.681: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.104212075s May 4 16:11:03.685: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.1080754s May 4 16:11:05.688: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.111028595s May 4 16:11:07.692: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m6.114372747s May 4 16:11:09.695: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.11818729s May 4 16:11:11.700: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.122282886s May 4 16:11:13.703: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.12532775s May 4 16:11:15.706: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.128889843s May 4 16:11:17.710: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.132820056s May 4 16:11:19.715: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.137596687s May 4 16:11:21.718: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.140927823s May 4 16:11:23.722: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.144434997s May 4 16:11:25.725: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.147858987s May 4 16:11:27.728: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.150677939s May 4 16:11:29.731: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.15368806s May 4 16:11:31.734: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.157043062s May 4 16:11:33.737: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m32.160156957s May 4 16:11:35.742: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.164812024s May 4 16:11:37.745: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.167636994s May 4 16:11:39.751: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.17379649s May 4 16:11:41.755: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.177442567s May 4 16:11:43.762: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.184290815s May 4 16:11:45.765: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.187470265s May 4 16:11:47.769: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.191469341s May 4 16:11:49.773: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.196033299s May 4 16:11:51.778: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.201021199s May 4 16:11:53.782: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.204317988s May 4 16:11:55.784: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.207037397s May 4 16:11:57.788: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.21098656s May 4 16:11:59.794: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m58.216437135s May 4 16:12:01.798: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.220536919s May 4 16:12:03.803: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.225387786s May 4 16:12:05.806: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.228494542s May 4 16:12:07.809: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.231745518s May 4 16:12:09.813: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.235843394s May 4 16:12:11.817: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.240126484s May 4 16:12:13.821: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.243379916s May 4 16:12:15.825: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.247503815s May 4 16:12:17.829: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.251869087s May 4 16:12:19.832: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.255235908s May 4 16:12:21.835: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.258163616s May 4 16:12:23.839: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.261675673s May 4 16:12:25.843: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m24.265832805s May 4 16:12:27.846: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.269110951s May 4 16:12:29.850: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.272401856s May 4 16:12:31.853: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.276196231s May 4 16:12:33.857: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.279648867s May 4 16:12:35.860: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.282589383s May 4 16:12:37.864: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.286571699s May 4 16:12:39.868: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.290412768s May 4 16:12:41.872: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.294502725s May 4 16:12:43.875: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.297978406s May 4 16:12:45.878: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.30041029s May 4 16:12:47.882: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.304650315s May 4 16:12:49.886: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.309054098s May 4 16:12:51.890: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m50.312460495s May 4 16:12:53.893: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.315465588s May 4 16:12:55.897: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.319269226s May 4 16:12:57.901: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.32342628s May 4 16:12:59.905: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.327413585s May 4 16:13:01.908: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.331125897s May 4 16:13:03.913: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.335323091s May 4 16:13:05.915: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.338207007s May 4 16:13:07.919: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.341463802s May 4 16:13:09.923: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.34605007s May 4 16:13:11.927: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.349674603s May 4 16:13:13.931: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.353615173s May 4 16:13:15.934: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.35724365s May 4 16:13:17.939: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m16.361570712s May 4 16:13:19.942: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.364922778s May 4 16:13:21.946: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.368505174s May 4 16:13:23.950: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.373011444s May 4 16:13:25.954: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.376351019s May 4 16:13:27.957: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.379484055s May 4 16:13:29.960: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.382918987s May 4 16:13:31.965: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.387569305s May 4 16:13:33.968: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.390714189s May 4 16:13:35.970: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.393237114s May 4 16:13:37.974: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.396506821s May 4 16:13:39.978: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.400327603s May 4 16:13:41.982: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.40497297s May 4 16:13:43.986: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m42.408732643s May 4 16:13:45.989: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.411924169s May 4 16:13:47.992: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.415140202s May 4 16:13:49.997: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.419277254s May 4 16:13:52.000: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.423095715s May 4 16:13:54.004: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.426753459s May 4 16:13:56.007: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.429721805s May 4 16:13:58.011: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.433447992s May 4 16:14:00.015: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.437304027s May 4 16:14:02.019: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.441499146s May 4 16:14:04.025: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.447720371s May 4 16:14:06.028: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.450930136s May 4 16:14:08.033: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.455429113s May 4 16:14:10.037: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m8.459813395s May 4 16:14:12.041: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.463588718s May 4 16:14:14.044: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.467185553s May 4 16:14:16.048: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.470643865s May 4 16:14:18.052: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.474636179s May 4 16:14:20.056: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.478296282s May 4 16:14:22.060: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.482516s May 4 16:14:24.063: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.486143765s May 4 16:14:26.067: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.489445568s May 4 16:14:28.070: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.492909498s May 4 16:14:30.073: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.496019842s May 4 16:14:32.077: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.499710618s May 4 16:14:34.079: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.502136234s May 4 16:14:36.083: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m34.505728619s May 4 16:14:38.086: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.508623991s May 4 16:14:40.090: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.51260833s May 4 16:14:42.093: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.516239808s May 4 16:14:44.097: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.520081919s May 4 16:14:46.101: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.523913932s May 4 16:14:48.104: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.526765903s May 4 16:14:50.108: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.530342997s May 4 16:14:52.112: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.53454264s May 4 16:14:54.116: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.53858283s May 4 16:14:56.120: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.542490042s May 4 16:14:58.123: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.545336403s May 4 16:15:00.127: INFO: Pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m58.549258643s
May 4 16:15:02.134: INFO: Failed to get logs from node "node2" pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30" container "dapi-container": the server rejected our request for an unknown reason (get pods downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30)
STEP: delete the pod
May 4 16:15:02.141: INFO: Waiting for pod downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 to disappear
May 4 16:15:02.143: INFO: Pod downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 still exists
May 4 16:15:04.145: INFO: Waiting for pod downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 to disappear
May 4 16:15:04.148: INFO: Pod downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 still exists
May 4 16:15:06.144: INFO: Waiting for pod downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 to disappear
May 4 16:15:06.147: INFO: Pod downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 no longer exists
May 4 16:15:06.147: FAIL: Unexpected error:
    <*errors.errorString | 0xc0065e3cc0>: {
        s: "expected pod \"downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30\" success: Gave up after waiting 5m0s for pod \"downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30\" to be \"Succeeded or Failed\"",
    }
    expected pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30" success: Gave up after waiting 5m0s for pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00004f760, 0x4c29f00, 0x15, 0xc000eeb000, 0x0, 0xc0025d51a8, 0x1, 0x1, 0x4de7490)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutputRegexp(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:532
k8s.io/kubernetes/test/e2e/common.testDownwardAPIUsingPod(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:425
k8s.io/kubernetes/test/e2e/common.testDownwardAPI(0xc00004f760, 0xc002148c80, 0x31, 0xc0046223f0, 0x1, 0x1, 0xc0025d51a8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:391 +0x75c
k8s.io/kubernetes/test/e2e/common.glob..func5.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:106 +0x21d
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003576d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc003576d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc003576d80, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "downward-api-6154".
STEP: Found 10 events.
May 4 16:15:06.153: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: { } Scheduled: Successfully assigned downward-api-6154/downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30 to node2
May 4 16:15:06.153: INFO: At 2021-05-04 16:10:03 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: {multus } AddedInterface: Add eth0 [10.244.3.177/24]
May 4 16:15:06.153: INFO: At 2021-05-04 16:10:03 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:15:06.153: INFO: At 2021-05-04 16:10:04 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:15:06.153: INFO: At 2021-05-04 16:10:04 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:15:06.153: INFO: At 2021-05-04 16:10:05 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:15:06.153: INFO: At 2021-05-04 16:10:08 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: {multus } AddedInterface: Add eth0 [10.244.3.180/24]
May 4 16:15:06.153: INFO: At 2021-05-04 16:10:08 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:15:06.153: INFO: At 2021-05-04 16:10:08 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:15:06.153: INFO: At 2021-05-04 16:10:14 +0000 UTC - event for downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30: {multus } AddedInterface: Add eth0 [10.244.3.184/24]
May 4 16:15:06.154: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:15:06.154: INFO:
May 4 16:15:06.160: INFO: Logging node info for node master1
May 4 16:15:06.163: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 36060 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:15:06.164: INFO: Logging kubelet events for node master1 May 4 16:15:06.167: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:15:06.188: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.188: INFO: Container 
coredns ready: true, restart count 1 May 4 16:15:06.188: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.188: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:15:06.188: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:15:06.188: INFO: Init container install-cni ready: true, restart count 0 May 4 16:15:06.188: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:15:06.188: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.188: INFO: Container kube-multus ready: true, restart count 1 May 4 16:15:06.188: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:15:06.188: INFO: Container docker-registry ready: true, restart count 0 May 4 16:15:06.188: INFO: Container nginx ready: true, restart count 0 May 4 16:15:06.188: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:15:06.188: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:15:06.188: INFO: Container node-exporter ready: true, restart count 0 May 4 16:15:06.188: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.188: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:15:06.188: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.188: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:15:06.188: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.188: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:15:06.188: INFO: 
kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.188: INFO: Container kube-proxy ready: true, restart count 1 W0504 16:15:06.201788 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:15:06.226: INFO: Latency metrics for node master1 May 4 16:15:06.226: INFO: Logging node info for node master2 May 4 16:15:06.229: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 36057 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:15:06.229: INFO: Logging kubelet events for node master2 May 4 16:15:06.232: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:15:06.241: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:15:06.242: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:15:06.242: INFO: Container node-exporter ready: true, restart count 0 May 4 16:15:06.242: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.242: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:15:06.242: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.242: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:15:06.242: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.242: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:15:06.242: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.242: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:15:06.242: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:15:06.242: INFO: Init container install-cni ready: true, restart count 0 May 4 16:15:06.242: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:15:06.242: INFO: 
kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.242: INFO: Container kube-multus ready: true, restart count 1 May 4 16:15:06.242: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.242: INFO: Container autoscaler ready: true, restart count 1 W0504 16:15:06.254675 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:15:06.280: INFO: Latency metrics for node master2 May 4 16:15:06.280: INFO: Logging node info for node master3 May 4 16:15:06.283: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 36056 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:57 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:57 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:15:06.283: INFO: Logging kubelet events for node master3 May 4 16:15:06.286: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:15:06.294: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.294: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:15:06.294: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.294: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:15:06.294: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:15:06.294: INFO: Init container install-cni ready: true, restart count 0 May 4 16:15:06.294: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:15:06.294: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.294: INFO: Container kube-multus ready: true, restart count 1 May 4 16:15:06.294: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.294: INFO: Container coredns ready: true, restart count 1 May 4 16:15:06.294: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:15:06.294: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:15:06.294: INFO: Container node-exporter ready: true, restart count 0 May 4 16:15:06.294: INFO: kube-apiserver-master3 started 
at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.294: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:15:06.294: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.294: INFO: Container kube-controller-manager ready: true, restart count 2 W0504 16:15:06.308175 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:15:06.335: INFO: Latency metrics for node master3 May 4 16:15:06.335: INFO: Logging node info for node node1 May 4 16:15:06.338: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 36068 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:59 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:59 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:59 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:59 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:15:06.339: INFO: Logging kubelet events for node node1 May 4 16:15:06.340: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:15:06.360: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.360: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:15:06.360: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.360: INFO: Container liveness-http ready: true, restart count 17 May 4 16:15:06.360: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.360: INFO: Container srv ready: true, restart count 0 May 4 16:15:06.360: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:10:44 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.360: INFO: Container env3cont ready: false, restart count 0 May 4 16:15:06.360: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.360: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:15:06.360: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:15:06.360: INFO: Container discover ready: false, restart count 0 May 4 16:15:06.360: INFO: Container init ready: false, restart count 0 May 4 
16:15:06.360: INFO: Container install ready: false, restart count 0 May 4 16:15:06.360: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.360: INFO: Container webserver ready: false, restart count 0 May 4 16:15:06.360: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.360: INFO: Container kube-multus ready: true, restart count 1 May 4 16:15:06.360: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:15:06.360: INFO: Container nodereport ready: true, restart count 0 May 4 16:15:06.360: INFO: Container reconcile ready: true, restart count 0 May 4 16:15:06.360: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:15:06.360: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:15:06.360: INFO: Container grafana ready: true, restart count 0 May 4 16:15:06.360: INFO: Container prometheus ready: true, restart count 1 May 4 16:15:06.360: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:15:06.360: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:15:06.360: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.360: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:15:06.360: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.360: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:15:06.360: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:15:06.360: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:15:06.360: INFO: Container prometheus-operator ready: true, 
restart count 0 May 4 16:15:06.360: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:15:06.360: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:15:06.360: INFO: Container node-exporter ready: true, restart count 0 May 4 16:15:06.360: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:15:06.361: INFO: Init container install-cni ready: true, restart count 2 May 4 16:15:06.361: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:15:06.361: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:15:06.361: INFO: Container collectd ready: true, restart count 0 May 4 16:15:06.361: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:15:06.361: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:15:06.361: INFO: busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda started at 2021-05-04 16:11:20 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.361: INFO: Container busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda ready: false, restart count 0 May 4 16:15:06.361: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:15:06.361: INFO: Container kube-sriovdp ready: true, restart count 0 W0504 16:15:06.373545 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:15:06.420: INFO: Latency metrics for node node1 May 4 16:15:06.420: INFO: Logging node info for node node2 May 4 16:15:06.422: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 36066 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:59 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:59 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:14:59 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:14:59 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:15:06.423: INFO: Logging kubelet events for node node2
May 4 16:15:06.425: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:15:06.438: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.438: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:15:06.438: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.438: INFO: Container webserver ready: true, restart count 0
May 4 16:15:06.438: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:15:06.438: INFO: Init container install-cni ready: true, restart count 2
May 4 16:15:06.438: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:15:06.438: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.438: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:15:06.438: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:15:06.438: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:15:06.438: INFO: Container node-exporter ready: true, restart count 0
May 4 16:15:06.438: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:15:06.438: INFO: Container tas-controller ready: true, restart count 0
May 4 16:15:06.438: INFO: Container tas-extender ready: true, restart count 0
May 4 16:15:06.438: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.438: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:15:06.438: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.438: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:15:06.438: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.438: INFO: Container kube-multus ready: true, restart count 1
May 4 16:15:06.438: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:15:06.438: INFO: Container collectd ready: true, restart count 0
May 4 16:15:06.438: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:15:06.438: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:15:06.438: INFO: var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c started at 2021-05-04 16:11:37 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.438: INFO: Container dapi-container ready: false, restart count 0
May 4 16:15:06.438: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:15:06.438: INFO: Container discover ready: false, restart count 0
May 4 16:15:06.438: INFO: Container init ready: false, restart count 0
May 4 16:15:06.438: INFO: Container install ready: false, restart count 0
May 4 16:15:06.438: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.438: INFO: Container e2e-test-httpd-pod ready: false, restart count 0
May 4 16:15:06.438: INFO: pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524 started at 2021-05-04 16:11:17 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.438: INFO: Container env-test ready: false, restart count 0
May 4 16:15:06.438: INFO: pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 started at 2021-05-04 16:14:12 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.439: INFO: Container env-test ready: false, restart count 0
May 4 16:15:06.439: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.439: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:15:06.439: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.439: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:15:06.439: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:15:06.439: INFO: Container nodereport ready: true, restart count 0
May 4 16:15:06.439: INFO: Container reconcile ready: true, restart count 0
May 4 16:15:06.439: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.439: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:15:06.439: INFO: test-rolling-update-controller-9v9w8 started at 2021-05-04 16:11:51 +0000 UTC (0+1 container statuses recorded)
May 4 16:15:06.439: INFO: Container httpd ready: false, restart count 0
W0504 16:15:06.452199      29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:15:06.499: INFO: Latency metrics for node node2
May 4 16:15:06.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6154" for this suite.

• Failure [304.962 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:15:06.147: Unexpected error:
      <*errors.errorString | 0xc0065e3cc0>: {
          s: "expected pod \"downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30\" success: Gave up after waiting 5m0s for pod \"downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30\" to be \"Succeeded or Failed\"",
      }
      expected pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30" success: Gave up after waiting 5m0s for pod "downward-api-a4c6ece0-39f5-42bc-9b38-0fbf55dabc30" to be "Succeeded or Failed"
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":307,"failed":1,"failures":["[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:14:49.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects
[Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:14:49.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
May 4 16:14:55.006: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-04T16:14:54Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-04T16:14:54Z]] name:name1 resourceVersion:36045 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:306106d8-3c0d-49a2-bf2d-4e27c6baa03b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May 4 16:15:05.011: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-04T16:15:05Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-04T16:15:05Z]] name:name2 resourceVersion:36089 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ec463cf6-e14b-4a96-a6fe-8aab44eec807] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May 4 16:15:15.019: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-04T16:14:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-04T16:15:15Z]] name:name1 resourceVersion:36224 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:306106d8-3c0d-49a2-bf2d-4e27c6baa03b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May 4 16:15:25.024: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-04T16:15:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-04T16:15:25Z]] name:name2 resourceVersion:36259 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ec463cf6-e14b-4a96-a6fe-8aab44eec807] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May 4 16:15:35.031: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-04T16:14:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-04T16:15:15Z]] name:name1 resourceVersion:36295 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:306106d8-3c0d-49a2-bf2d-4e27c6baa03b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May 4 16:15:45.039: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-04T16:15:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-04T16:15:25Z]] name:name2 resourceVersion:36335 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ec463cf6-e14b-4a96-a6fe-8aab44eec807] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:15:55.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-9090" for this suite.

• [SLOW TEST:66.136 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":26,"skipped":404,"failed":1,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:11:16.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-7404/secret-test-b3bddc5c-6a69-4a75-8919-5b577b47ffc2
STEP: Creating a pod to test consume secrets
May 4 16:11:17.029: INFO: Waiting up to 5m0s for pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524" in namespace "secrets-7404" to be "Succeeded or Failed"
May 4 16:11:17.031: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065323ms
May 4 16:11:19.037: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007105396s
May 4 16:11:21.040: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010128393s
May 4 16:11:23.043: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013875948s
May 4 16:11:25.046: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017007865s
May 4 16:11:27.051: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021816331s
May 4 16:11:29.054: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024881043s
May 4 16:11:31.059: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 14.029242631s
May 4 16:11:33.062: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 16.032394855s
May 4 16:11:35.071: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 18.041093415s
May 4 16:11:37.073: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 20.043518875s
May 4 16:11:39.077: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 22.047278257s
May 4 16:11:41.080: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 24.050380358s
May 4 16:11:43.083: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 26.05349929s
May 4 16:11:45.088: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 28.058301151s
May 4 16:11:47.093: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 30.06339466s
May 4 16:11:49.098: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 32.068960705s
May 4 16:11:51.101: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 34.071483347s
May 4 16:11:53.105: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 36.075840446s
May 4 16:11:55.112: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 38.082235749s
May 4 16:11:57.117: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 40.087282308s
May 4 16:11:59.120: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 42.090709368s
May 4 16:12:01.123: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 44.093304311s
May 4 16:12:03.129: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 46.09950718s
May 4 16:12:05.134: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 48.105076615s
May 4 16:12:07.139: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 50.109949765s
May 4 16:12:09.147: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 52.117444142s
May 4 16:12:11.151: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 54.121107602s
May 4 16:12:13.155: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 56.125457351s
May 4 16:12:15.164: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 58.13436196s
May 4 16:12:17.168: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.138967117s
May 4 16:12:19.174: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.144747266s
May 4 16:12:21.177: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.14773861s
May 4 16:12:23.182: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.152273465s
May 4 16:12:25.186: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.156783043s
May 4 16:12:27.191: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.161358028s
May 4 16:12:29.197: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.167747622s
May 4 16:12:31.202: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.172727026s
May 4 16:12:33.205: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.175788909s
May 4 16:12:35.209: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.179377428s
May 4 16:12:37.216: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.186175216s
May 4 16:12:39.220: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.19084983s
May 4 16:12:41.224: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.194144401s
May 4 16:12:43.228: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.198169516s
May 4 16:12:45.231: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.202054928s
May 4 16:12:47.234: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.20507328s
May 4 16:12:49.238: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.20903062s
May 4 16:12:51.241: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.21184731s
May 4 16:12:53.245: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.215866335s
May 4 16:12:55.248: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.218829013s
May 4 16:12:57.252: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.222193946s
May 4 16:12:59.256: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.226427445s
May 4 16:13:01.259: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.229274741s
May 4 16:13:03.263: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.233293216s
May 4 16:13:05.266: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.236923976s
May 4 16:13:07.270: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.240121132s
May 4 16:13:09.273: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.243869962s
May 4 16:13:11.276: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.246590372s
May 4 16:13:13.279: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.249971097s
May 4 16:13:15.285: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.255378138s
May 4 16:13:17.288: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.258739508s
May 4 16:13:19.291: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.261492375s
May 4 16:13:21.294: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.264503304s
May 4 16:13:23.298: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.268196121s
May 4 16:13:25.302: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.272485564s
May 4 16:13:27.306: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.277023124s
May 4 16:13:29.310: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.280799461s
May 4 16:13:31.314: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.28420401s
May 4 16:13:33.318: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.288451883s
May 4 16:13:35.321: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.291993278s
May 4 16:13:37.325: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.295707416s
May 4 16:13:39.328: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.298772186s
May 4 16:13:41.332: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.302143218s
May 4 16:13:43.335: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.305682473s
May 4 16:13:45.338: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.308914329s
May 4 16:13:47.342: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.312875407s
May 4 16:13:49.346: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.316280872s
May 4 16:13:51.350: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.320785362s
May 4 16:13:53.354: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.324770604s
May 4 16:13:55.359: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.32957337s
May 4 16:13:57.362: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.332681377s
May 4 16:13:59.366: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.336404307s
May 4 16:14:01.369: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.340068135s
May 4 16:14:03.374: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.34490991s
May 4 16:14:05.378: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.348589256s
May 4 16:14:07.382: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.352297887s
May 4 16:14:09.386: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.356241871s
May 4 16:14:11.389: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.359338472s
May 4 16:14:13.393: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.363416974s
May 4 16:14:15.397: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.367813116s
May 4 16:14:17.400: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.371062737s
May 4 16:14:19.407: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.3779322s
May 4 16:14:21.411: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.381425611s
May 4 16:14:23.415: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.385978981s
May 4 16:14:25.420: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.390976287s
May 4 16:14:27.423: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.394009185s
May 4 16:14:29.427: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.39756943s
May 4 16:14:31.430: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.400701545s
May 4 16:14:33.434: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.404442954s
May 4 16:14:35.438: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.408336234s
May 4 16:14:37.441: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.411326074s
May 4 16:14:39.447: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.417239865s
May 4 16:14:41.450: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.420244423s
May 4 16:14:43.455: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.425835137s
May 4 16:14:45.459: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.430069832s
May 4 16:14:47.463: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.433468231s
May 4 16:14:49.468: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.438987794s
May 4 16:14:51.472: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.442941115s
May 4 16:14:53.476: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.44690092s
May 4 16:14:55.480: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.450700421s
May 4 16:14:57.483: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.453778466s
May 4 16:14:59.487: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.457168166s
May 4 16:15:01.490: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.460165752s
May 4 16:15:03.496: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.466703964s
May 4 16:15:05.500: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.470879667s
May 4 16:15:07.503: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.473763253s
May 4 16:15:09.507: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.477337677s
May 4 16:15:11.510: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.480794828s
May 4 16:15:13.514: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.484963496s
May 4 16:15:15.521: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.491242369s
May 4 16:15:17.524: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.494170238s
May 4 16:15:19.529: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.499430226s
May 4 16:15:21.533: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.503953393s
May 4 16:15:23.539: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.509389462s
May 4 16:15:25.543: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.513101347s
May 4 16:15:27.546: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.516079962s
May 4 16:15:29.548: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.518999128s
May 4 16:15:31.552: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.52276821s
May 4 16:15:33.555: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.525901102s
May 4 16:15:35.559: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.529653266s
May 4 16:15:37.562: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.532960335s
May 4 16:15:39.567: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.537383825s
May 4 16:15:41.571: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.54204524s
May 4 16:15:43.576: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.547063029s
May 4 16:15:45.580: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.550236576s
May 4 16:15:47.584: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.554718452s
May 4 16:15:49.588: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.558396713s
May 4 16:15:51.591: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.561901042s
May 4 16:15:53.594: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.564485512s
May 4 16:15:55.596: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.566464534s
May 4 16:15:57.599: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.56920908s
May 4 16:15:59.602: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.572253842s
May 4 16:16:01.606: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.576740378s
May 4 16:16:03.611: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false.
Elapsed: 4m46.581519528s May 4 16:16:05.614: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.585060191s May 4 16:16:07.618: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.588991832s May 4 16:16:09.624: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.594356975s May 4 16:16:11.627: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.598012315s May 4 16:16:13.631: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.601942785s May 4 16:16:15.636: INFO: Pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.606252134s May 4 16:16:17.644: INFO: Failed to get logs from node "node2" pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524" container "env-test": the server rejected our request for an unknown reason (get pods pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524) STEP: delete the pod May 4 16:16:17.650: INFO: Waiting for pod pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524 to disappear May 4 16:16:17.653: INFO: Pod pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524 still exists May 4 16:16:19.653: INFO: Waiting for pod pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524 to disappear May 4 16:16:19.655: INFO: Pod pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524 no longer exists May 4 16:16:19.656: FAIL: Unexpected error: <*errors.errorString | 0xc003805180>: { s: "expected pod \"pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524\" to be \"Succeeded or Failed\"", } expected pod 
"pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524" success: Gave up after waiting 5m0s for pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524" to be "Succeeded or Failed" occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000cae840, 0x4c097bc, 0xf, 0xc00141c000, 0x0, 0xc0040ab178, 0x6, 0x6, 0x4de7488) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:525 k8s.io/kubernetes/test/e2e/common.glob..func27.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:127 +0xa14 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002965080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc002965080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc002965080, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "secrets-7404". STEP: Found 7 events. 
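[Editor's note] The events collected below show the actual root cause: the kubelet on node2 could not pull `docker.io/library/busybox:1.29` because the anonymous Docker Hub pull rate limit was exhausted (`toomanyrequests`), so the pod sat in Pending/ImagePullBackOff until the 5m0s timeout above fired. A minimal sketch of one common mitigation, authenticating image pulls so they count against an account quota rather than the anonymous per-IP quota; the pod and secret names here are hypothetical, not from this test run:

```yaml
# Hypothetical mitigation sketch: authenticated pulls raise the Docker Hub
# rate limit. The secret would be created out of band, for example with:
#   kubectl create secret docker-registry regcred \
#     --docker-server=https://index.docker.io/v1/ \
#     --docker-username=<user> --docker-password=<access-token>
apiVersion: v1
kind: Pod
metadata:
  name: env-test-authenticated   # hypothetical pod name
spec:
  imagePullSecrets:
    - name: regcred              # hypothetical secret created above
  containers:
    - name: env-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "env"]
  restartPolicy: Never
```

An alternative common in e2e environments is mirroring frequently pulled images into a cluster-local registry and pulling from there; the node image lists later in this log show such a registry in use (images tagged `localhost:30500/...`).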
May 4 16:16:19.660: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524: { } Scheduled: Successfully assigned secrets-7404/pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524 to node2 May 4 16:16:19.660: INFO: At 2021-05-04 16:11:18 +0000 UTC - event for pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524: {multus } AddedInterface: Add eth0 [10.244.3.194/24] May 4 16:16:19.661: INFO: At 2021-05-04 16:11:18 +0000 UTC - event for pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:16:19.661: INFO: At 2021-05-04 16:11:19 +0000 UTC - event for pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:16:19.661: INFO: At 2021-05-04 16:11:19 +0000 UTC - event for pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524: {kubelet node2} Failed: Error: ErrImagePull May 4 16:16:19.661: INFO: At 2021-05-04 16:11:19 +0000 UTC - event for pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:16:19.661: INFO: At 2021-05-04 16:11:19 +0000 UTC - event for pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:16:19.662: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:16:19.662: INFO: May 4 16:16:19.666: INFO: Logging node info for node master1 May 4 16:16:19.669: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 36530 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 
kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f
:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:18 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:19.670: INFO: Logging kubelet events for node master1 May 4 16:16:19.673: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:16:19.683: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:19.683: INFO: Init container install-cni ready: true, restart count 0 May 4 16:16:19.683: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:16:19.683: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.683: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:19.683: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.683: INFO: Container coredns ready: true, restart count 1 May 4 16:16:19.683: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.683: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:16:19.683: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.683: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:16:19.683: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.683: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:19.683: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.683: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:19.683: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.683: INFO: Container kube-proxy ready: true, restart count 1 May 4 
16:16:19.683: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:16:19.683: INFO: Container docker-registry ready: true, restart count 0 May 4 16:16:19.683: INFO: Container nginx ready: true, restart count 0 May 4 16:16:19.683: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:19.683: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:19.683: INFO: Container node-exporter ready: true, restart count 0 W0504 16:16:19.696893 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:19.723: INFO: Latency metrics for node master1 May 4 16:16:19.723: INFO: Logging node info for node master2 May 4 16:16:19.725: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 36529 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:19.725: INFO: Logging kubelet events for node master2 May 4 16:16:19.727: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:16:19.734: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.734: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:19.734: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.734: INFO: Container autoscaler ready: true, restart count 1 May 4 16:16:19.734: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:19.734: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:19.734: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:19.734: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.734: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:19.734: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.734: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:19.734: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.734: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:16:19.734: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 
container statuses recorded) May 4 16:16:19.734: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:16:19.734: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:19.734: INFO: Init container install-cni ready: true, restart count 0 May 4 16:16:19.734: INFO: Container kube-flannel ready: true, restart count 1 W0504 16:16:19.745830 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:19.770: INFO: Latency metrics for node master2 May 4 16:16:19.770: INFO: Logging node info for node master3 May 4 16:16:19.773: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 36527 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:19.773: INFO: Logging kubelet events for node master3 May 4 16:16:19.776: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:16:19.784: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.784: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:19.784: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.784: INFO: Container coredns ready: true, restart count 1 May 4 16:16:19.784: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:19.784: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:19.784: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:19.784: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.784: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:19.784: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.784: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:19.784: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:19.784: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:16:19.785: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container 
statuses recorded) May 4 16:16:19.785: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:16:19.785: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:19.785: INFO: Init container install-cni ready: true, restart count 0 May 4 16:16:19.785: INFO: Container kube-flannel ready: true, restart count 1 W0504 16:16:19.798400 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:19.821: INFO: Latency metrics for node master3 May 4 16:16:19.821: INFO: Logging node info for node node1 May 4 16:16:19.824: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 36503 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:10 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:10 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:10 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:10 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:16:19.825: INFO: Logging kubelet events for node node1
May 4 16:16:19.828: INFO: Logging pods the kubelet thinks is on node node1
May 4 16:16:19.843: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container nodereport ready: true, restart count 0
May 4 16:16:19.843: INFO: 	Container reconcile ready: true, restart count 0
May 4 16:16:19.843: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May 4 16:16:19.843: INFO: 	Container grafana ready: true, restart count 0
May 4 16:16:19.843: INFO: 	Container prometheus ready: true, restart count 1
May 4 16:16:19.843: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
May 4 16:16:19.843: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
May 4 16:16:19.843: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 16:16:19.843: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container nfd-worker ready: true, restart count 0
May 4 16:16:19.843: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 4 16:16:19.843: INFO: 	Container prometheus-operator ready: true, restart count 0
May 4 16:16:19.843: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 4 16:16:19.843: INFO: 	Container node-exporter ready: true, restart count 0
May 4 16:16:19.843: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:16:19.843: INFO: 	Init container install-cni ready: true, restart count 2
May 4 16:16:19.843: INFO: 	Container kube-flannel ready: true, restart count 2
May 4 16:16:19.843: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container collectd ready: true, restart count 0
May 4 16:16:19.843: INFO: 	Container collectd-exporter ready: true, restart count 0
May 4 16:16:19.843: INFO: 	Container rbac-proxy ready: true, restart count 0
May 4 16:16:19.843: INFO: busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda started at 2021-05-04 16:11:20 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda ready: false, restart count 0
May 4 16:16:19.843: INFO: affinity-nodeport-transition-hn44d started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container affinity-nodeport-transition ready: true, restart count 0
May 4 16:16:19.843: INFO: server started at 2021-05-04 16:15:55 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container agnhost-container ready: true, restart count 0
May 4 16:16:19.843: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.843: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 4 16:16:19.843: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.844: INFO: 	Container kube-proxy ready: true, restart count 1
May 4 16:16:19.844: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.844: INFO: 	Container liveness-http ready: false, restart count 17
May 4 16:16:19.844: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.844: INFO: 	Container srv ready: true, restart count 0
May 4 16:16:19.844: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.844: INFO: 	Container nginx-proxy ready: true, restart count 2
May 4 16:16:19.844: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded)
May 4 16:16:19.844: INFO: 	Container discover ready: false, restart count 0
May 4 16:16:19.844: INFO: 	Container init ready: false, restart count 0
May 4 16:16:19.844: INFO: 	Container install ready: false, restart count 0
May 4 16:16:19.844: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.844: INFO: 	Container webserver ready: false, restart count 0
May 4 16:16:19.844: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.844: INFO: 	Container kube-multus ready: true, restart count 1
W0504 16:16:19.856246 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:16:19.907: INFO: Latency metrics for node node1 May 4 16:16:19.907: INFO: Logging node info for node node2 May 4 16:16:19.910: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 36502 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:10 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:10 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:10 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:10 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:16:19.910: INFO: Logging kubelet events for node node2
May 4 16:16:19.912: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:16:19.931: INFO: var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c started at 2021-05-04 16:11:37 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container dapi-container ready: false, restart count 0
May 4 16:16:19.931: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container discover ready: false, restart count 0
May 4 16:16:19.931: INFO: 	Container init ready: false, restart count 0
May 4 16:16:19.931: INFO: 	Container install ready: false, restart count 0
May 4 16:16:19.931: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container collectd ready: true, restart count 0
May 4 16:16:19.931: INFO: 	Container collectd-exporter ready: true, restart count 0
May 4 16:16:19.931: INFO: 	Container rbac-proxy ready: true, restart count 0
May 4 16:16:19.931: INFO: execpod-affinityp2lx7 started at 2021-05-04 16:15:12 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container agnhost-container ready: true, restart count 0
May 4 16:16:19.931: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container e2e-test-httpd-pod ready: false, restart count 0
May 4 16:16:19.931: INFO: tester started at 2021-05-04 16:15:59 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container tester ready: false, restart count 0
May 4 16:16:19.931: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container nginx-proxy ready: true, restart count 2
May 4 16:16:19.931: INFO: pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 started at 2021-05-04 16:14:12 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container env-test ready: false, restart count 0
May 4 16:16:19.931: INFO: affinity-nodeport-transition-qr9hq started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container affinity-nodeport-transition ready: true, restart count 0
May 4 16:16:19.931: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
May 4 16:16:19.931: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 4 16:16:19.931: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container nodereport ready: true, restart count 0
May 4 16:16:19.931: INFO: 	Container reconcile ready: true, restart count 0
May 4 16:16:19.931: INFO: test-rolling-update-controller-9v9w8 started at 2021-05-04 16:11:51 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container httpd ready: false, restart count 0
May 4 16:16:19.931: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container webserver ready: true, restart count 0
May 4 16:16:19.931: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Init container install-cni ready: true, restart count 2
May 4 16:16:19.931: INFO: 	Container kube-flannel ready: true, restart count 2
May 4 16:16:19.931: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container cmk-webhook ready: true, restart count 0
May 4 16:16:19.931: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container tas-controller ready: true, restart count 0
May 4 16:16:19.931: INFO: 	Container tas-extender ready: true, restart count 0
May 4 16:16:19.931: INFO: affinity-nodeport-transition-kqrgt started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container affinity-nodeport-transition ready: true, restart count 0
May 4 16:16:19.931: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:15:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container env3cont ready: false, restart count 0
May 4 16:16:19.931: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container liveness-exec ready: false, restart count 6
May 4 16:16:19.931: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container kube-proxy ready: true, restart count 2
May 4 16:16:19.931: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.931: INFO: 	Container nfd-worker ready: true, restart count 0
May 4 16:16:19.932: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:19.932: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 4 16:16:19.932: INFO: 	Container node-exporter ready: true, restart count 0
May 4 16:16:19.932: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:19.932: INFO: 	Container kube-multus ready: true, restart count 1
W0504 16:16:19.945110 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:16:19.991: INFO: Latency metrics for node node2
May 4 16:16:19.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7404" for this suite.
• Failure [303.004 seconds]
[sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36
  should be consumable via the environment [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:16:19.656: Unexpected error:
      <*errors.errorString | 0xc003805180>: {
          s: "expected pod \"pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524\" to be \"Succeeded or Failed\"",
      }
      expected pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524" success: Gave up after waiting 5m0s for pod "pod-configmaps-c28995c6-2b6a-4da5-a000-12579ce4c524" to be "Succeeded or Failed"
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":294,"failed":1,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:16:20.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148
[It] should support creating IngressClass API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 4 16:16:20.064: INFO: starting watch
STEP: patching
STEP: updating
May 4 16:16:20.075: INFO: waiting for watch events with expected annotations
May 4 16:16:20.075: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:16:20.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-3720" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":18,"skipped":302,"failed":1,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:11:20.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:16:20.430: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002c2200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc0050dd700, 0xc00518e000, 0x211)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 +0xfe
k8s.io/kubernetes/test/e2e/common.glob..func12.2.1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:50 +0x1af
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002d4d800, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "kubelet-test-9093".
STEP: Found 9 events.
May 4 16:16:20.434: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda: { } Scheduled: Successfully assigned kubelet-test-9093/busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda to node1
May 4 16:16:20.434: INFO: At 2021-05-04 16:11:21 +0000 UTC - event for busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda: {multus } AddedInterface: Add eth0 [10.244.4.148/24]
May 4 16:16:20.434: INFO: At 2021-05-04 16:11:21 +0000 UTC - event for busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:16:20.434: INFO: At 2021-05-04 16:11:22 +0000 UTC - event for busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:16:20.434: INFO: At 2021-05-04 16:11:22 +0000 UTC - event for busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda: {kubelet node1} Failed: Error: ErrImagePull
May 4 16:16:20.434: INFO: At 2021-05-04 16:11:23 +0000 UTC - event for busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:16:20.434: INFO: At 2021-05-04 16:11:25 +0000 UTC - event for busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda: {multus } AddedInterface: Add eth0 [10.244.4.149/24]
May 4 16:16:20.434: INFO: At 2021-05-04 16:11:25 +0000 UTC - event for busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:16:20.434: INFO: At 2021-05-04 16:11:25 +0000 UTC - event for busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda: {kubelet node1} Failed: Error: ImagePullBackOff
May 4 16:16:20.436: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:16:20.436: INFO: busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:11:20 +0000 UTC ContainersNotReady containers with unready status: [busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:11:20 +0000 UTC ContainersNotReady containers with unready status: [busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:11:20 +0000 UTC }]
May 4 16:16:20.436: INFO:
May 4 16:16:20.440: INFO: Logging node info for node master1
May 4 16:16:20.444: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 36530 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock
nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 
2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:18 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:20.444: INFO: Logging kubelet events for node master1 May 4 16:16:20.447: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:16:20.457: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:20.457: INFO: Init container 
install-cni ready: true, restart count 0 May 4 16:16:20.457: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:16:20.457: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.457: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:20.457: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.457: INFO: Container coredns ready: true, restart count 1 May 4 16:16:20.457: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.457: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:16:20.457: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.457: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:20.457: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.457: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:20.457: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.457: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:16:20.457: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.457: INFO: Container docker-registry ready: true, restart count 0 May 4 16:16:20.457: INFO: Container nginx ready: true, restart count 0 May 4 16:16:20.457: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.457: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:20.457: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:20.457: INFO: 
kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.457: INFO: Container kube-scheduler ready: true, restart count 0 W0504 16:16:20.473657 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:20.501: INFO: Latency metrics for node master1 May 4 16:16:20.501: INFO: Logging node info for node master2 May 4 16:16:20.504: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 36529 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:20.504: INFO: Logging kubelet events for node master2 May 4 16:16:20.506: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:16:20.514: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.514: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:20.514: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:20.514: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.514: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:20.514: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.514: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:20.514: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.514: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:16:20.514: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.514: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:16:20.514: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:20.514: INFO: Init container install-cni ready: true, restart count 0 May 4 16:16:20.514: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:16:20.514: INFO: 
kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.514: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:20.514: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.514: INFO: Container autoscaler ready: true, restart count 1 W0504 16:16:20.526678 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:20.550: INFO: Latency metrics for node master2 May 4 16:16:20.550: INFO: Logging node info for node master3 May 4 16:16:20.552: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 36527 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:17 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:20.553: INFO: Logging kubelet events for node master3 May 4 16:16:20.554: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:16:20.563: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.563: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:16:20.563: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:20.563: INFO: Init container install-cni ready: true, restart count 0 May 4 16:16:20.563: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:16:20.563: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.563: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:20.563: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.563: INFO: Container coredns ready: true, restart count 1 May 4 16:16:20.563: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.563: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:20.563: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:20.563: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.563: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:20.563: INFO: kube-controller-manager-master3 
started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.563: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:20.563: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.563: INFO: Container kube-scheduler ready: true, restart count 2 W0504 16:16:20.577893 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:20.600: INFO: Latency metrics for node master3 May 4 16:16:20.600: INFO: Logging node info for node node1 May 4 16:16:20.603: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 36569 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:20 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:20 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:20 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:20 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:20.603: INFO: Logging kubelet events for node node1 May 4 16:16:20.606: INFO: Logging pods the kubelet thinks are on node node1 May 4 16:16:20.620: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:16:20.620: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:16:20.620: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container liveness-http ready: false, restart count 17 May 4 16:16:20.620: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container srv ready: true, restart count 0 May 4 16:16:20.620: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:16:20.620: INFO: Container discover ready: false, restart count 0 May 4 16:16:20.620: INFO: Container init ready: false, restart count 0 May 4 16:16:20.620: INFO: Container install ready: false, restart count 0 May 4 16:16:20.620: INFO: netserver-0 started at 2021-05-04 16:16:20 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container webserver 
ready: false, restart count 0 May 4 16:16:20.620: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:20.620: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container webserver ready: false, restart count 0 May 4 16:16:20.620: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:16:20.620: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.620: INFO: Container nodereport ready: true, restart count 0 May 4 16:16:20.620: INFO: Container reconcile ready: true, restart count 0 May 4 16:16:20.620: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:16:20.620: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:16:20.620: INFO: Container grafana ready: true, restart count 0 May 4 16:16:20.620: INFO: Container prometheus ready: true, restart count 1 May 4 16:16:20.620: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:16:20.620: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:16:20.620: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:20.620: INFO: Init container install-cni ready: true, restart count 2 May 4 16:16:20.620: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:16:20.620: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:16:20.620: INFO: 
prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.620: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:20.620: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:16:20.620: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.620: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:20.620: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:20.620: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:16:20.620: INFO: Container collectd ready: true, restart count 0 May 4 16:16:20.620: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:16:20.620: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:16:20.620: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:16:20.620: INFO: busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda started at 2021-05-04 16:11:20 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container busybox-scheduling-4b838859-f880-4cc3-9f50-deaf16217eda ready: false, restart count 0 May 4 16:16:20.620: INFO: affinity-nodeport-transition-hn44d started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 4 16:16:20.620: INFO: server started at 2021-05-04 16:15:55 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.620: INFO: Container agnhost-container ready: true, restart count 0 W0504 16:16:20.631913 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:16:20.663: INFO: Latency metrics for node node1 May 4 16:16:20.663: INFO: Logging node info for node node2 May 4 16:16:20.665: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 36555 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:20 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:20 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:20 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:20 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:20.666: INFO: Logging kubelet events for node node2 May 4 16:16:20.668: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:16:20.682: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:20.682: INFO: var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c started at 2021-05-04 16:11:37 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container dapi-container ready: false, restart count 0 May 4 16:16:20.682: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:16:20.682: INFO: Container discover ready: false, restart count 0 May 4 16:16:20.682: INFO: Container init ready: false, restart count 0 May 4 16:16:20.682: INFO: Container install ready: false, restart count 0 May 4 16:16:20.682: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:16:20.682: INFO: Container 
collectd ready: true, restart count 0 May 4 16:16:20.682: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:16:20.682: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:16:20.682: INFO: execpod-affinityp2lx7 started at 2021-05-04 16:15:12 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:16:20.682: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 May 4 16:16:20.682: INFO: pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 started at 2021-05-04 16:14:12 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container env-test ready: false, restart count 0 May 4 16:16:20.682: INFO: tester started at 2021-05-04 16:15:59 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container tester ready: false, restart count 0 May 4 16:16:20.682: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:16:20.682: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.682: INFO: Container nodereport ready: true, restart count 0 May 4 16:16:20.682: INFO: Container reconcile ready: true, restart count 0 May 4 16:16:20.682: INFO: affinity-nodeport-transition-qr9hq started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 4 16:16:20.682: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:16:20.682: INFO: 
sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:16:20.682: INFO: test-rolling-update-controller-9v9w8 started at 2021-05-04 16:11:51 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container httpd ready: false, restart count 0 May 4 16:16:20.682: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container webserver ready: true, restart count 0 May 4 16:16:20.682: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:20.682: INFO: Init container install-cni ready: true, restart count 2 May 4 16:16:20.682: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:16:20.682: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.682: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:16:20.682: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.682: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:20.682: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:20.682: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:16:20.683: INFO: Container tas-controller ready: true, restart count 0 May 4 16:16:20.683: INFO: Container tas-extender ready: true, restart count 0 May 4 16:16:20.683: INFO: affinity-nodeport-transition-kqrgt started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.683: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 4 16:16:20.683: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 
16:15:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.683: INFO: Container env3cont ready: false, restart count 0 May 4 16:16:20.683: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.683: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:16:20.683: INFO: netserver-1 started at 2021-05-04 16:16:20 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.683: INFO: Container webserver ready: false, restart count 0 May 4 16:16:20.683: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.683: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:16:20.683: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:16:20.683: INFO: Container nfd-worker ready: true, restart count 0 W0504 16:16:20.695795 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:21.039: INFO: Latency metrics for node node2 May 4 16:16:21.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9093" for this suite. 
• Failure [300.662 seconds]
[k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

    May 4 16:16:20.430: Unexpected error:
        <*errors.errorString | 0xc0002c2200>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103
------------------------------
{"msg":"FAILED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":242,"failed":3,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]"]}
SS
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":-1,"completed":15,"skipped":410,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:11:37.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP:
Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 4 16:11:37.323: INFO: Waiting up to 5m0s for pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c" in namespace "var-expansion-4466" to be "Succeeded or Failed" May 4 16:11:37.328: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.829441ms May 4 16:11:39.331: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007501016s May 4 16:11:41.335: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01128742s May 4 16:11:43.338: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014301796s May 4 16:11:45.341: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017181232s May 4 16:11:47.343: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019929856s May 4 16:11:49.346: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022667189s May 4 16:11:51.349: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.025444545s May 4 16:11:53.352: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.028945404s May 4 16:11:55.360: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.036870036s May 4 16:11:57.364: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.040544925s May 4 16:11:59.369: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.045426684s May 4 16:12:01.373: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.049884779s May 4 16:12:03.376: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.052713902s May 4 16:12:05.380: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.056695091s May 4 16:12:07.384: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.060398642s May 4 16:12:09.387: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.064029619s May 4 16:12:11.391: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.067928279s May 4 16:12:13.395: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.071950606s May 4 16:12:15.398: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.074712433s May 4 16:12:17.401: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.077947343s May 4 16:12:19.405: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.081297169s May 4 16:12:21.410: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.086163926s May 4 16:12:23.416: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.09274328s May 4 16:12:25.419: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.096024184s May 4 16:12:27.422: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 50.098954777s May 4 16:12:29.427: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 52.103110338s May 4 16:12:31.431: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 54.107500631s May 4 16:12:33.435: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 56.111230965s May 4 16:12:35.441: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 58.117667238s May 4 16:12:37.444: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.120334972s May 4 16:12:39.450: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.126360859s May 4 16:12:41.452: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.12894724s May 4 16:12:43.455: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.132016932s May 4 16:12:45.459: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.135519817s May 4 16:12:47.462: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m10.138941089s May 4 16:12:49.466: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.142478995s May 4 16:12:51.472: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.148335263s May 4 16:12:53.474: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.150948986s May 4 16:12:55.477: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.153903255s May 4 16:12:57.480: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.156670894s May 4 16:12:59.483: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.159527071s May 4 16:13:01.486: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.162954455s May 4 16:13:03.492: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.168227513s May 4 16:13:05.496: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.172710077s May 4 16:13:07.500: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.176214839s May 4 16:13:09.502: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.179010069s May 4 16:13:11.506: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m34.182084621s May 4 16:13:13.509: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.185379234s May 4 16:13:15.513: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.18949735s May 4 16:13:17.516: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.192696243s May 4 16:13:19.524: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.200290611s May 4 16:13:21.528: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.204376458s May 4 16:13:23.532: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.208187614s May 4 16:13:25.534: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.211003969s May 4 16:13:27.537: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.213998688s May 4 16:13:29.541: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.217926728s May 4 16:13:31.547: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.223266718s May 4 16:13:33.551: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.227338994s May 4 16:13:35.553: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m58.230012161s May 4 16:13:37.557: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.233641626s May 4 16:13:39.562: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.238768605s May 4 16:13:41.566: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.242542084s May 4 16:13:43.569: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.245427408s May 4 16:13:45.573: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.249451301s May 4 16:13:47.577: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.25332167s May 4 16:13:49.580: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.256814459s May 4 16:13:51.586: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.262374897s May 4 16:13:53.590: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.266311074s May 4 16:13:55.596: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.272985504s May 4 16:13:57.599: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.276000694s May 4 16:13:59.603: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.279937503s May 4 16:14:01.606: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m24.282998341s May 4 16:14:03.610: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.2867587s May 4 16:14:05.614: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.290190553s May 4 16:14:07.617: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.29321938s May 4 16:14:09.620: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.296244812s May 4 16:14:11.622: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.298900737s May 4 16:14:13.626: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.302995147s May 4 16:14:15.630: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.306986318s May 4 16:14:17.633: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.310015235s May 4 16:14:19.638: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.314330393s May 4 16:14:21.642: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.3182959s May 4 16:14:23.647: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.323143107s May 4 16:14:25.651: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.327546962s May 4 16:14:27.654: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m50.330987146s May 4 16:14:29.658: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.334367674s May 4 16:14:31.662: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.338110376s May 4 16:14:33.665: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.341884855s May 4 16:14:35.672: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.34817387s May 4 16:14:37.676: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.352400869s May 4 16:14:39.679: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.355458131s May 4 16:14:41.682: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.359029168s May 4 16:14:43.690: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.366247556s May 4 16:14:45.694: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.370994512s May 4 16:14:47.699: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.375878532s May 4 16:14:49.704: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.380132916s May 4 16:14:51.707: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.383501437s May 4 16:14:53.712: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m16.388293884s
May 4 16:14:55.720: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.396324406s
May 4 16:14:57.723: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.399324723s
May 4 16:14:59.727: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.403796891s
May 4 16:15:01.731: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.407110278s
May 4 16:15:03.735: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.411838256s
May 4 16:15:05.739: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.415095225s
May 4 16:15:07.742: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.418906519s
May 4 16:15:09.745: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.421842522s
May 4 16:15:11.749: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.425178561s
May 4 16:15:13.753: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.429067315s
May 4 16:15:15.755: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.431948369s
May 4 16:15:17.759: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false.
Elapsed: 3m40.435482204s
May 4 16:15:19.762: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.439033086s
May 4 16:15:21.767: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.443427426s
May 4 16:15:23.770: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.446574103s
May 4 16:15:25.773: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.449674224s
May 4 16:15:27.777: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.45329568s
May 4 16:15:29.781: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.457439311s
May 4 16:15:31.783: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.460044233s
May 4 16:15:33.786: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.46300427s
May 4 16:15:35.790: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.466056842s
May 4 16:15:37.793: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.469105517s
May 4 16:15:39.796: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.472556178s
May 4 16:15:41.799: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.47583169s
May 4 16:15:43.802: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false.
Elapsed: 4m6.478987322s
May 4 16:15:45.806: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.482927311s
May 4 16:15:47.810: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.486319175s
May 4 16:15:49.813: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.489796021s
May 4 16:15:51.816: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.49223551s
May 4 16:15:53.818: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.494967916s
May 4 16:15:55.822: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.498861142s
May 4 16:15:57.825: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.501622892s
May 4 16:15:59.828: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.504185083s
May 4 16:16:01.832: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.50902551s
May 4 16:16:03.835: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.511928947s
May 4 16:16:05.840: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.516905749s
May 4 16:16:07.844: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.520152188s
May 4 16:16:09.847: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false.
Elapsed: 4m32.523239633s
May 4 16:16:11.850: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.52664938s
May 4 16:16:13.853: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.529662016s
May 4 16:16:15.856: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.532758507s
May 4 16:16:17.859: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.535988486s
May 4 16:16:19.862: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.538398991s
May 4 16:16:21.865: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.541280771s
May 4 16:16:23.869: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.545383886s
May 4 16:16:25.871: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.548046313s
May 4 16:16:27.875: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.551647884s
May 4 16:16:29.878: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.554615498s
May 4 16:16:31.882: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.558886577s
May 4 16:16:33.886: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false.
Elapsed: 4m56.562462197s
May 4 16:16:35.890: INFO: Pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.566308979s
May 4 16:16:37.898: INFO: Failed to get logs from node "node2" pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c" container "dapi-container": the server rejected our request for an unknown reason (get pods var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c)
STEP: delete the pod
May 4 16:16:37.903: INFO: Waiting for pod var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c to disappear
May 4 16:16:37.906: INFO: Pod var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c still exists
May 4 16:16:39.907: INFO: Waiting for pod var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c to disappear
May 4 16:16:39.909: INFO: Pod var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c still exists
May 4 16:16:41.906: INFO: Waiting for pod var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c to disappear
May 4 16:16:41.909: INFO: Pod var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c still exists
May 4 16:16:43.907: INFO: Waiting for pod var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c to disappear
May 4 16:16:43.909: INFO: Pod var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c no longer exists
May 4 16:16:43.910: FAIL: Unexpected error:
    <*errors.errorString | 0xc004fb75f0>: {
        s: "expected pod \"var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c\" success: Gave up after waiting 5m0s for pod \"var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c\" to be \"Succeeded or Failed\"",
    }
    expected pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c" success: Gave up after waiting 5m0s for pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000f062c0, 0x4c09bc7, 0xf, 0xc00380c400, 0x0, 0xc0046171a8, 0x3, 0x3, 0x4de7488)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:525
k8s.io/kubernetes/test/e2e/common.glob..func9.1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:58 +0x22a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002947080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002947080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002947080, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "var-expansion-4466".
STEP: Found 7 events.
May 4 16:16:43.915: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c: { } Scheduled: Successfully assigned var-expansion-4466/var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c to node2
May 4 16:16:43.915: INFO: At 2021-05-04 16:11:38 +0000 UTC - event for var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c: {multus } AddedInterface: Add eth0 [10.244.3.196/24]
May 4 16:16:43.915: INFO: At 2021-05-04 16:11:38 +0000 UTC - event for var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:16:43.915: INFO: At 2021-05-04 16:11:39 +0000 UTC - event for var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:16:43.915: INFO: At 2021-05-04 16:11:39 +0000 UTC - event for var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:16:43.915: INFO: At 2021-05-04 16:11:40 +0000 UTC - event for var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:16:43.915: INFO: At 2021-05-04 16:11:40 +0000 UTC - event for var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:16:43.917: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:16:43.917: INFO:
May 4 16:16:43.922: INFO: Logging node info for node master1
May 4 16:16:43.924: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 36704 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1
kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f
:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:38 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:16:43.925: INFO: Logging kubelet events for node master1
May 4 16:16:43.927: INFO: Logging pods the kubelet thinks is on node master1
May 4 16:16:43.936: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:43.936: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:16:43.936: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:43.936: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:16:43.936: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:43.936: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:16:43.936: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:43.936: INFO: Container docker-registry ready: true, restart count 0
May 4 16:16:43.936: INFO: Container nginx ready: true, restart count 0
May 4 16:16:43.936: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:43.936: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:16:43.936: INFO: Container node-exporter ready: true, restart count 0
May 4 16:16:43.936: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:43.936: INFO: Container kube-scheduler ready: true, restart count 0
May 4 16:16:43.936: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:16:43.936: INFO: Init container install-cni ready: true, restart count 0
May 4 16:16:43.936: INFO: Container kube-flannel ready: true, restart count 3
May 4 16:16:43.936: INFO: kube-multus-ds-amd64-jflvf started
at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:43.936: INFO: Container kube-multus ready: true, restart count 1
May 4 16:16:43.936: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:43.936: INFO: Container coredns ready: true, restart count 1
May 4 16:16:43.936: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:43.936: INFO: Container nfd-controller ready: true, restart count 0
W0504 16:16:43.949603 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:16:43.981: INFO: Latency metrics for node master1
May 4 16:16:43.981: INFO: Logging node info for node master2
May 4 16:16:43.983: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 36702 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:37 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:16:37 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:37 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:37 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:43.984: INFO: Logging kubelet events for node master2 May 4 16:16:43.986: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:16:43.992: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:43.992: INFO: Init container install-cni ready: true, restart count 0 May 4 16:16:43.992: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:16:43.992: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:43.992: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:43.992: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:16:43.992: INFO: Container autoscaler ready: true, restart count 1 May 4 16:16:43.992: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:43.992: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:43.992: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:43.992: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:43.992: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:43.992: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:43.992: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:43.993: 
INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:43.993: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:16:43.993: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:43.993: INFO: Container kube-proxy ready: true, restart count 2 W0504 16:16:44.003803 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:44.032: INFO: Latency metrics for node master2 May 4 16:16:44.032: INFO: Logging node info for node master3 May 4 16:16:44.036: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 36701 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:37 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:37 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:37 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:37 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:44.037: INFO: Logging kubelet events for node master3 May 4 16:16:44.039: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:16:44.047: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:44.047: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:44.047: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:44.047: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:44.047: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:44.047: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:16:44.047: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:44.047: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:16:44.047: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:44.047: INFO: Init container install-cni ready: true, restart count 0 May 4 16:16:44.047: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:16:44.047: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:44.047: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:44.047: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container 
statuses recorded) May 4 16:16:44.047: INFO: Container coredns ready: true, restart count 1 May 4 16:16:44.047: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:44.047: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:44.047: INFO: Container node-exporter ready: true, restart count 0 W0504 16:16:44.060398 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:44.083: INFO: Latency metrics for node master3 May 4 16:16:44.083: INFO: Logging node info for node node1 May 4 16:16:44.086: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 36718 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:40 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:40 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:40 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:40 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:16:44.087: INFO: Logging kubelet events for node node1
May 4 16:16:44.089: INFO: Logging pods the kubelet thinks is on node node1
May 4 16:16:44.104: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.104: INFO: Container kube-multus ready: true, restart count 1
May 4 16:16:44.104: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.104: INFO: Container webserver ready: false, restart count 0
May 4 16:16:44.104: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.104: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 16:16:44.104: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:44.104: INFO: Container nodereport ready: true, restart count 0
May 4 16:16:44.104: INFO: Container reconcile ready: true, restart count 0
May 4 16:16:44.104: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded)
May 4 16:16:44.104: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 4 16:16:44.104: INFO: Container grafana ready: true, restart count 0
May 4 16:16:44.104: INFO: Container prometheus ready: true, restart count 1
May 4 16:16:44.104: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 4 16:16:44.104: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 4 16:16:44.104: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:16:44.104: INFO: Init container install-cni ready: true, restart count 2
May 4 16:16:44.104: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:16:44.104: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.105: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:16:44.105: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:44.105: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:16:44.105: INFO: Container prometheus-operator ready: true, restart count 0
May 4 16:16:44.105: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:44.105: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:16:44.105: INFO: Container node-exporter ready: true, restart count 0
May 4 16:16:44.105: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:16:44.105: INFO: Container collectd ready: true, restart count 0
May 4 16:16:44.105: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:16:44.105: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:16:44.105: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.105: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:16:44.105: INFO: affinity-nodeport-transition-hn44d started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.105: INFO: Container affinity-nodeport-transition ready: true, restart count 0
May 4 16:16:44.105: INFO: server started at 2021-05-04 16:15:55 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.105: INFO: Container agnhost-container ready: true, restart count 0
May 4 16:16:44.105: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.105: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:16:44.105: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.105: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:16:44.105: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.105: INFO: Container liveness-http ready: false, restart count 17
May 4 16:16:44.105: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.105: INFO: Container srv ready: true, restart count 0
May 4 16:16:44.105: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded)
May 4 16:16:44.105: INFO: Container discover ready: false, restart count 0
May 4 16:16:44.105: INFO: Container init ready: false, restart count 0
May 4 16:16:44.105: INFO: Container install ready: false, restart count 0
May 4 16:16:44.105: INFO: netserver-0 started at 2021-05-04 16:16:20 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.105: INFO: Container webserver ready: true, restart count 0
W0504 16:16:44.118056 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:16:44.149: INFO: Latency metrics for node node1 May 4 16:16:44.149: INFO: Logging node info for node node2 May 4 16:16:44.152: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 36717 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:40 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:40 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:40 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:40 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:16:44.153: INFO: Logging kubelet events for node node2
May 4 16:16:44.155: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:16:44.170: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container webserver ready: true, restart count 0
May 4 16:16:44.170: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:16:44.170: INFO: Init container install-cni ready: true, restart count 2
May 4 16:16:44.170: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:16:44.170: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:16:44.170: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:44.170: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:16:44.170: INFO: Container node-exporter ready: true, restart count 0
May 4 16:16:44.170: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:44.170: INFO: Container tas-controller ready: true, restart count 0
May 4 16:16:44.170: INFO: Container tas-extender ready: true, restart count 0
May 4 16:16:44.170: INFO: affinity-nodeport-transition-kqrgt started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container affinity-nodeport-transition ready: true, restart count 0
May 4 16:16:44.170: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:15:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container env3cont ready: false, restart count 0
May 4 16:16:44.170: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:16:44.170: INFO: netserver-1 started at 2021-05-04 16:16:20 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container webserver ready: true, restart count 0
May 4 16:16:44.170: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:16:44.170: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:16:44.170: INFO: test-container-pod started at 2021-05-04 16:16:42 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container webserver ready: false, restart count 0
May 4 16:16:44.170: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container kube-multus ready: true, restart count 1
May 4 16:16:44.170: INFO: pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 started at 2021-05-04 16:16:21 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container nginx ready: false, restart count 0
May 4 16:16:44.170: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:16:44.170: INFO: Container discover ready: false, restart count 0
May 4 16:16:44.170: INFO: Container init ready: false, restart count 0
May 4 16:16:44.170: INFO: Container install ready: false, restart count 0
May 4 16:16:44.170: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:16:44.170: INFO: Container collectd ready: true, restart count 0
May 4 16:16:44.170: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:16:44.170: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:16:44.170: INFO: execpod-affinityp2lx7 started at 2021-05-04 16:15:12 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container agnhost-container ready: true, restart count 0
May 4 16:16:44.170: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container e2e-test-httpd-pod ready: false, restart count 0
May 4 16:16:44.170: INFO: pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 started at 2021-05-04 16:14:12 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container env-test ready: false, restart count 0
May 4 16:16:44.170: INFO: tester started at 2021-05-04 16:15:59 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container tester ready: false, restart count 0
May 4 16:16:44.170: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:16:44.170: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:16:44.170: INFO: Container nodereport ready: true, restart count 0
May 4 16:16:44.170: INFO: Container reconcile ready: true, restart count 0
May 4 16:16:44.170: INFO: affinity-nodeport-transition-qr9hq started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container affinity-nodeport-transition ready: true, restart count 0
May 4 16:16:44.170: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:16:44.170: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:16:44.170: INFO: test-rolling-update-controller-9v9w8 started at 2021-05-04 16:11:51 +0000 UTC (0+1 container statuses recorded)
May 4 16:16:44.170: INFO: Container httpd ready: false, restart count 0
W0504 16:16:44.182795 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:16:44.214: INFO: Latency metrics for node node2
May 4 16:16:44.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4466" for this suite.
• Failure [306.941 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow composing env vars into new env vars [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:16:43.910: Unexpected error:
      <*errors.errorString | 0xc004fb75f0>: {
          s: "expected pod \"var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c\" success: Gave up after waiting 5m0s for pod \"var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c\" to be \"Succeeded or Failed\"",
      }
      expected pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c" success: Gave up after waiting 5m0s for pod "var-expansion-f40503a0-123d-4f48-a90a-ab9f4afa468c" to be "Succeeded or Failed"
  occurred
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":410,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:16:20.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-5921
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 4 16:16:20.156: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 4 16:16:20.188: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 4 16:16:22.192: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 4 16:16:24.193: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 4 16:16:26.192: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 4 16:16:28.192: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 4 16:16:30.192: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 4 16:16:32.192: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 4 16:16:34.193: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 4 16:16:36.193: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 4 16:16:38.192: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 4 16:16:40.192: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 4 16:16:42.193: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 4 16:16:42.199: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 4 16:16:46.224: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.210:8080/dial?request=hostname&protocol=udp&host=10.244.4.152&port=8081&tries=1'] Namespace:pod-network-test-5921 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:16:46.224: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:16:46.338: INFO: Waiting for responses: map[]
May 4 16:16:46.340: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.210:8080/dial?request=hostname&protocol=udp&host=10.244.3.208&port=8081&tries=1'] Namespace:pod-network-test-5921 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:16:46.340: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:16:46.446: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:16:46.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5921" for this suite.
• [SLOW TEST:26.319 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":314,"failed":1,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:11:51.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach]
[sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:11:51.264: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 4 16:11:51.271: INFO: Pod name sample-pod: Found 0 pods out of 1
May 4 16:11:56.275: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 4 16:16:56.284: FAIL: error in waiting for pods to come up: failed to wait for pods running: [timed out waiting for the condition]
Unexpected error:
    <*errors.errorString | 0xc002f30ce0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.testRollingUpdateDeployment(0xc000d24160)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:289 +0x608
k8s.io/kubernetes/test/e2e/apps.glob..func4.4()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:92 +0x2a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000871980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc000871980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc000871980, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
May 4 16:16:56.288: INFO: Log out all the ReplicaSets if there is no deployment created
May 4 16:16:56.291: INFO:
ReplicaSet "test-rolling-update-controller": &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5072 /apis/apps/v1/namespaces/deployment-5072/replicasets/test-rolling-update-controller ca1f2e01-6299-4302-b21b-5e09e769fec1 35159 1 2021-05-04 16:11:51 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/revision:3546343826724305832] [] [] [{e2e.test Update apps/v1 2021-05-04 16:11:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-04 16:11:51 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005869e68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 16:16:56.294: INFO: pod: "test-rolling-update-controller-9v9w8": &Pod{ObjectMeta:{test-rolling-update-controller-9v9w8 test-rolling-update-controller- deployment-5072 /api/v1/namespaces/deployment-5072/pods/test-rolling-update-controller-9v9w8 63c021ab-22b6-46b7-86f9-c5d59dcc02b2 36212 0 2021-05-04 16:11:51 +0000 UTC map[name:sample-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.199" ], "mac": "12:9c:6b:f4:5b:d3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.199" ], "mac": "12:9c:6b:f4:5b:d3", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-controller ca1f2e01-6299-4302-b21b-5e09e769fec1 0xc0072541c7 0xc0072541c8}] [] [{kube-controller-manager Update v1 2021-05-04 16:11:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca1f2e01-6299-4302-b21b-5e09e769fec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-04 16:11:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.199\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}} {multus Update v1 2021-05-04 16:11:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-65586,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-65586,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-65586,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,V
olumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:11:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:11:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:11:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:11:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.199,StartTime:2021-05-04 16:11:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "docker.io/library/httpd:2.4.38-alpine",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.199,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "deployment-5072". STEP: Found 10 events. 
May 4 16:16:56.297: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-rolling-update-controller-9v9w8: { } Scheduled: Successfully assigned deployment-5072/test-rolling-update-controller-9v9w8 to node2
May 4 16:16:56.297: INFO: At 2021-05-04 16:11:51 +0000 UTC - event for test-rolling-update-controller: {replicaset-controller } SuccessfulCreate: Created pod: test-rolling-update-controller-9v9w8
May 4 16:16:56.297: INFO: At 2021-05-04 16:11:53 +0000 UTC - event for test-rolling-update-controller-9v9w8: {multus } AddedInterface: Add eth0 [10.244.3.198/24]
May 4 16:16:56.297: INFO: At 2021-05-04 16:11:53 +0000 UTC - event for test-rolling-update-controller-9v9w8: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
May 4 16:16:56.297: INFO: At 2021-05-04 16:11:54 +0000 UTC - event for test-rolling-update-controller-9v9w8: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:16:56.297: INFO: At 2021-05-04 16:11:54 +0000 UTC - event for test-rolling-update-controller-9v9w8: {kubelet node2} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:16:56.297: INFO: At 2021-05-04 16:11:55 +0000 UTC - event for test-rolling-update-controller-9v9w8: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:16:56.297: INFO: At 2021-05-04 16:11:57 +0000 UTC - event for test-rolling-update-controller-9v9w8: {multus } AddedInterface: Add eth0 [10.244.3.199/24]
May 4 16:16:56.297: INFO: At 2021-05-04 16:11:57 +0000 UTC - event for test-rolling-update-controller-9v9w8: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine"
May 4 16:16:56.297: INFO: At 2021-05-04 16:11:57 +0000 UTC - event for test-rolling-update-controller-9v9w8: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:16:56.299: INFO: POD                                   NODE   PHASE    GRACE  CONDITIONS
May 4 16:16:56.299: INFO: test-rolling-update-controller-9v9w8  node2  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:11:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:11:51 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:11:51 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:11:51 +0000 UTC }]
May 4 16:16:56.299: INFO:
May 4 16:16:56.303: INFO: Logging node info for node master1
May 4 16:16:56.305: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 36800 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:48 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:48 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:48 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:48 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:56.306: INFO: Logging kubelet events for node master1 May 4 16:16:56.308: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:16:56.330: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:56.330: INFO: Init container 
install-cni ready: true, restart count 0 May 4 16:16:56.330: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:16:56.330: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.330: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:56.330: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.330: INFO: Container coredns ready: true, restart count 1 May 4 16:16:56.330: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.330: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:16:56.330: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.330: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:56.330: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.330: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:56.330: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.330: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:16:56.330: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:16:56.330: INFO: Container docker-registry ready: true, restart count 0 May 4 16:16:56.330: INFO: Container nginx ready: true, restart count 0 May 4 16:16:56.330: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:56.330: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:56.330: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:56.330: INFO: 
kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.330: INFO: Container kube-scheduler ready: true, restart count 0 W0504 16:16:56.344630 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:56.370: INFO: Latency metrics for node master1 May 4 16:16:56.370: INFO: Logging node info for node master2 May 4 16:16:56.372: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 36792 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:16:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:47 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:56.372: INFO: Logging kubelet events for node master2 May 4 16:16:56.374: INFO: Logging pods the kubelet thinks are on node master2 May 4 16:16:56.380: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.380: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:16:56.380: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.380: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:56.381: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.381: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:16:56.381: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.381: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:16:56.381: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:56.381: INFO: Init container install-cni ready: true, restart count 0 May 4 16:16:56.381: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:16:56.381: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.381: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:56.381: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 
container statuses recorded) May 4 16:16:56.381: INFO: Container autoscaler ready: true, restart count 1 May 4 16:16:56.381: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:56.381: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:56.381: INFO: Container node-exporter ready: true, restart count 0 W0504 16:16:56.393521 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:56.421: INFO: Latency metrics for node master2 May 4 16:16:56.421: INFO: Logging node info for node master3 May 4 16:16:56.424: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 36791 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:47 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:47 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:47 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:56.424: INFO: Logging kubelet events for node master3 May 4 16:16:56.427: INFO: Logging pods the kubelet thinks are on node master3 May 4 16:16:56.436: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.436: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:16:56.436: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.436: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:16:56.436: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.436: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:16:56.436: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:56.436: INFO: Init container install-cni ready: true, restart count 0 May 4 16:16:56.436: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:16:56.436: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.436: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:56.436: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.436: INFO: Container coredns ready: true, restart count 1 May 4 16:16:56.436: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses 
recorded) May 4 16:16:56.436: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:56.436: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:56.436: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.436: INFO: Container kube-apiserver ready: true, restart count 0 W0504 16:16:56.447320 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:56.474: INFO: Latency metrics for node master3 May 4 16:16:56.474: INFO: Logging node info for node node1 May 4 16:16:56.478: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 36832 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:50 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:50 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:50 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:50 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:56.479: INFO: Logging kubelet events for node node1 May 4 16:16:56.481: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:16:56.496: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.496: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:16:56.496: INFO: affinity-nodeport-transition-hn44d started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.496: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 4 16:16:56.496: INFO: server started at 2021-05-04 16:15:55 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.496: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:16:56.496: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.496: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:16:56.496: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.496: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:16:56.497: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.497: INFO: Container liveness-http ready: false, restart count 17 May 4 16:16:56.497: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 
started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.497: INFO: Container srv ready: true, restart count 0 May 4 16:16:56.497: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:16:56.497: INFO: Container discover ready: false, restart count 0 May 4 16:16:56.497: INFO: Container init ready: false, restart count 0 May 4 16:16:56.497: INFO: Container install ready: false, restart count 0 May 4 16:16:56.497: INFO: netserver-0 started at 2021-05-04 16:16:20 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.497: INFO: Container webserver ready: false, restart count 0 May 4 16:16:56.497: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.497: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:56.497: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.497: INFO: Container webserver ready: false, restart count 0 May 4 16:16:56.497: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.497: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:16:56.497: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:16:56.497: INFO: Container nodereport ready: true, restart count 0 May 4 16:16:56.497: INFO: Container reconcile ready: true, restart count 0 May 4 16:16:56.497: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:16:56.497: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:16:56.497: INFO: Container grafana ready: true, restart count 0 May 4 16:16:56.497: INFO: Container prometheus ready: true, restart count 1 May 4 16:16:56.497: INFO: Container prometheus-config-reloader ready: 
true, restart count 0 May 4 16:16:56.497: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:16:56.497: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:56.497: INFO: Init container install-cni ready: true, restart count 2 May 4 16:16:56.497: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:16:56.497: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.497: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:16:56.497: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:16:56.497: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:56.497: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:16:56.497: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:56.497: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:56.497: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:56.497: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:16:56.497: INFO: Container collectd ready: true, restart count 0 May 4 16:16:56.497: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:16:56.497: INFO: Container rbac-proxy ready: true, restart count 0 W0504 16:16:56.510203 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
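The per-pod lines in the kubelet dump above follow a fixed format, and in a long run it helps to summarize them rather than read entry by entry. A throwaway helper (my own, not part of the e2e suite), assuming the "Container <name> ready: <true|false>, restart count <n>" format shown:

```python
import re

# Matches per-container lines of the form seen in the kubelet dump above:
#   "Container <name> ready: <true|false>, restart count <n>"
LINE = re.compile(r"Container \S+ ready: (true|false), restart count (\d+)")

def summarize(dump):
    # Returns (ready, not_ready, total_restarts) over all matching lines.
    ready = not_ready = restarts = 0
    for state, count in LINE.findall(dump):
        if state == "true":
            ready += 1
        else:
            not_ready += 1
        restarts += int(count)
    return ready, not_ready, restarts
```

A high total restart count quickly points at crash-looping pods, e.g. liveness-http (ready: false, restart count 17) in the node1 listing above.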
May 4 16:16:56.542: INFO: Latency metrics for node node1 May 4 16:16:56.542: INFO: Logging node info for node node2 May 4 16:16:56.545: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 36831 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:50 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:50 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:16:50 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:16:50 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:16:56.545: INFO: Logging kubelet events for node node2 May 4 16:16:56.547: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:16:56.563: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:16:56.563: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:16:56.563: INFO: Container node-exporter ready: true, restart count 0 May 4 16:16:56.563: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:16:56.563: INFO: Container tas-controller ready: true, restart count 0 May 4 16:16:56.563: INFO: Container tas-extender ready: true, restart count 0 May 4 16:16:56.563: INFO: affinity-nodeport-transition-kqrgt started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 4 16:16:56.563: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:15:46 +0000 UTC (0+1 
container statuses recorded) May 4 16:16:56.563: INFO: Container env3cont ready: false, restart count 0 May 4 16:16:56.563: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:16:56.563: INFO: netserver-1 started at 2021-05-04 16:16:20 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container webserver ready: false, restart count 0 May 4 16:16:56.563: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:16:56.563: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:16:56.563: INFO: test-container-pod started at 2021-05-04 16:16:42 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container webserver ready: false, restart count 0 May 4 16:16:56.563: INFO: termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c started at 2021-05-04 16:16:44 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container termination-message-container ready: false, restart count 0 May 4 16:16:56.563: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container kube-multus ready: true, restart count 1 May 4 16:16:56.563: INFO: pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 started at 2021-05-04 16:16:21 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container nginx ready: false, restart count 0 May 4 16:16:56.563: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:16:56.563: INFO: Container discover ready: false, restart count 0 May 4 
16:16:56.563: INFO: Container init ready: false, restart count 0 May 4 16:16:56.563: INFO: Container install ready: false, restart count 0 May 4 16:16:56.563: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:16:56.563: INFO: Container collectd ready: true, restart count 0 May 4 16:16:56.563: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:16:56.563: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:16:56.563: INFO: execpod-affinityp2lx7 started at 2021-05-04 16:15:12 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:16:56.563: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.563: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 May 4 16:16:56.563: INFO: pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 started at 2021-05-04 16:14:12 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container env-test ready: false, restart count 0 May 4 16:16:56.564: INFO: tester started at 2021-05-04 16:15:59 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container tester ready: false, restart count 0 May 4 16:16:56.564: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:16:56.564: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:16:56.564: INFO: Container nodereport ready: true, restart count 0 May 4 16:16:56.564: INFO: Container reconcile ready: true, restart count 0 May 4 16:16:56.564: INFO: affinity-nodeport-transition-qr9hq started at 2021-05-04 16:15:06 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 
4 16:16:56.564: INFO: sample-crd-conversion-webhook-deployment-85d57b96d6-fnqmz started at 2021-05-04 16:16:46 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container sample-crd-conversion-webhook ready: true, restart count 0 May 4 16:16:56.564: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:16:56.564: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:16:56.564: INFO: test-rolling-update-controller-9v9w8 started at 2021-05-04 16:11:51 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container httpd ready: false, restart count 0 May 4 16:16:56.564: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container webserver ready: true, restart count 0 May 4 16:16:56.564: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:16:56.564: INFO: Init container install-cni ready: true, restart count 2 May 4 16:16:56.564: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:16:56.564: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:16:56.564: INFO: Container cmk-webhook ready: true, restart count 0 W0504 16:16:56.577538 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:16:56.619: INFO: Latency metrics for node node2 May 4 16:16:56.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5072" for this suite. 
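The "timed out waiting for the condition" error reported in the failure summary that follows is the generic outcome of a poll-until-deadline loop: the suite repeatedly checks a condition (here, all deployment pods Running) until a timeout expires. A minimal sketch of that pattern (illustrative only, not the actual k8s.io/apimachinery wait code):

```python
import time

def poll_until(interval, timeout, cond):
    # Poll cond() until it returns True or the deadline passes. This is the
    # generic pattern behind "timed out waiting for the condition"
    # (an illustrative sketch, not the real wait.PollImmediate implementation).
    deadline = time.monotonic() + timeout
    while True:
        if cond():
            return None
        if time.monotonic() >= deadline:
            return "timed out waiting for the condition"
        time.sleep(interval)

# Pods that never reach Running surface as the timeout error:
print(poll_until(0.01, 0.05, lambda: False))
```

Because the error carries no detail about *why* the condition never held, the suite dumps node and pod state (as above) at failure time; that dump, not the error string, is where the diagnosis lives.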
• Failure [305.389 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:16:56.284: error in waiting for pods to come up: failed to wait for pods running: [timed out waiting for the condition]
  Unexpected error:
      <*errors.errorString | 0xc002f30ce0>: {
          s: "failed to wait for pods running: [timed out waiting for the condition]",
      }
      failed to wait for pods running: [timed out waiting for the condition]
  occurred
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:289
------------------------------
{"msg":"FAILED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":19,"skipped":345,"failed":1,"failures":["[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:16:46.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook
pod STEP: Wait for the deployment to be ready May 4 16:16:46.807: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 4 16:16:48.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741806, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741806, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741806, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741806, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:16:50.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741806, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741806, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741806, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755741806, loc:(*time.Location)(0x770c940)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:16:53.828: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:16:53.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:16:59.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5754" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.498 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":20,"skipped":341,"failed":1,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:17:00.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching May 4 16:17:00.693: INFO: starting watch STEP: patching STEP: updating May 4 16:17:00.700: INFO: waiting for watch events with expected annotations May 4 16:17:00.700: INFO: saw patched and updated annotations STEP: getting 
/approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:17:00.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-2289" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":21,"skipped":353,"failed":1,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:17:00.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image May 4 16:17:00.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6011 create -f -' May 4 16:17:01.120: INFO: stderr: "" May 4 16:17:01.120: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image May 4 16:17:01.121: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=kubectl-6011 diff -f -' May 4 16:17:01.542: INFO: rc: 1 May 4 16:17:01.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6011 delete -f -' May 4 16:17:01.658: INFO: stderr: "" May 4 16:17:01.658: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:17:01.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6011" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":22,"skipped":369,"failed":1,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:15:06.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2406 STEP: creating service affinity-nodeport-transition in namespace services-2406 STEP: creating replication controller affinity-nodeport-transition in namespace services-2406 I0504 
16:15:06.617297 29 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-2406, replica count: 3 I0504 16:15:09.667998 29 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:15:12.668633 29 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 16:15:12.677: INFO: Creating new exec pod May 4 16:15:17.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 4 16:15:17.972: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" May 4 16:15:17.973: INFO: stdout: "" May 4 16:15:17.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.233.26.240 80' May 4 16:15:18.241: INFO: stderr: "+ nc -zv -t -w 2 10.233.26.240 80\nConnection to 10.233.26.240 80 port [tcp/http] succeeded!\n" May 4 16:15:18.241: INFO: stdout: "" May 4 16:15:18.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:18.500: INFO: rc: 1 May 4 16:15:18.500: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 
1 Retrying... May 4 16:15:19.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:19.768: INFO: rc: 1 May 4 16:15:19.768: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:20.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:20.745: INFO: rc: 1 May 4 16:15:20.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:21.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:21.791: INFO: rc: 1 May 4 16:15:21.791: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:22.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:22.766: INFO: rc: 1 May 4 16:15:22.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:23.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:23.766: INFO: rc: 1 May 4 16:15:23.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:24.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:24.758: INFO: rc: 1 May 4 16:15:24.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:25.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:25.785: INFO: rc: 1 May 4 16:15:25.785: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:26.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:26.773: INFO: rc: 1 May 4 16:15:26.773: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:27.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:28.095: INFO: rc: 1 May 4 16:15:28.095: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:28.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:28.746: INFO: rc: 1 May 4 16:15:28.746: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:29.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:29.858: INFO: rc: 1 May 4 16:15:29.858: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:30.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:30.761: INFO: rc: 1 May 4 16:15:30.761: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:31.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:31.777: INFO: rc: 1 May 4 16:15:31.778: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:32.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:32.775: INFO: rc: 1 May 4 16:15:32.775: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:33.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:33.759: INFO: rc: 1 May 4 16:15:33.759: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:34.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:34.761: INFO: rc: 1 May 4 16:15:34.761: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:35.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:35.775: INFO: rc: 1 May 4 16:15:35.775: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:36.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:36.771: INFO: rc: 1 May 4 16:15:36.771: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:37.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:37.797: INFO: rc: 1 May 4 16:15:37.797: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:38.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:38.762: INFO: rc: 1 May 4 16:15:38.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:39.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:39.771: INFO: rc: 1 May 4 16:15:39.771: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:40.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:40.750: INFO: rc: 1 May 4 16:15:40.750: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:41.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:41.760: INFO: rc: 1 May 4 16:15:41.760: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:42.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:42.786: INFO: rc: 1 May 4 16:15:42.786: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:43.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:43.766: INFO: rc: 1 May 4 16:15:43.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:44.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:44.778: INFO: rc: 1 May 4 16:15:44.778: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:45.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:45.781: INFO: rc: 1 May 4 16:15:45.781: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:46.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:46.759: INFO: rc: 1 May 4 16:15:46.759: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:15:47.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:47.801: INFO: rc: 1 May 4 16:15:47.801: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:15:48.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:15:48.750: INFO: rc: 1 May 4 16:15:48.751: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[... 52 further identical retry attempts, roughly one per second from 16:15:49 through 16:16:40, each returning rc: 1 with "nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused" ...]
May 4 16:16:41.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:41.766: INFO: rc: 1 May 4 16:16:41.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:16:42.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:42.922: INFO: rc: 1 May 4 16:16:42.922: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:16:43.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:43.764: INFO: rc: 1 May 4 16:16:43.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:16:44.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:44.833: INFO: rc: 1 May 4 16:16:44.833: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:16:45.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:45.782: INFO: rc: 1 May 4 16:16:45.782: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:16:46.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:46.752: INFO: rc: 1 May 4 16:16:46.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:16:47.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:48.424: INFO: rc: 1 May 4 16:16:48.424: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:16:48.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:48.743: INFO: rc: 1 May 4 16:16:48.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:16:49.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:50.132: INFO: rc: 1 May 4 16:16:50.132: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:16:50.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:50.775: INFO: rc: 1 May 4 16:16:50.775: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:16:51.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:51.764: INFO: rc: 1 May 4 16:16:51.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:16:52.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:52.779: INFO: rc: 1 May 4 16:16:52.779: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:16:53.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:53.741: INFO: rc: 1 May 4 16:16:53.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:16:54.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:54.781: INFO: rc: 1 May 4 16:16:54.781: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:16:55.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:55.745: INFO: rc: 1 May 4 16:16:55.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:16:56.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:56.771: INFO: rc: 1 May 4 16:16:56.771: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:16:57.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:58.219: INFO: rc: 1 May 4 16:16:58.219: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:16:58.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:58.953: INFO: rc: 1 May 4 16:16:58.953: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:16:59.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:16:59.772: INFO: rc: 1 May 4 16:16:59.772: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:00.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:01.271: INFO: rc: 1 May 4 16:17:01.271: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:01.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:01.887: INFO: rc: 1 May 4 16:17:01.887: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:02.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:02.951: INFO: rc: 1 May 4 16:17:02.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:03.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:04.236: INFO: rc: 1 May 4 16:17:04.236: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:04.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:04.762: INFO: rc: 1 May 4 16:17:04.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:05.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:05.742: INFO: rc: 1 May 4 16:17:05.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:06.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:06.770: INFO: rc: 1 May 4 16:17:06.770: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:07.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:07.747: INFO: rc: 1 May 4 16:17:07.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:08.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:08.769: INFO: rc: 1 May 4 16:17:08.769: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:09.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:09.748: INFO: rc: 1 May 4 16:17:09.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:10.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:10.775: INFO: rc: 1 May 4 16:17:10.776: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:11.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:11.761: INFO: rc: 1 May 4 16:17:11.761: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:12.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:12.760: INFO: rc: 1 May 4 16:17:12.760: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:13.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:13.747: INFO: rc: 1 May 4 16:17:13.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:14.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:14.766: INFO: rc: 1 May 4 16:17:14.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:15.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:15.770: INFO: rc: 1 May 4 16:17:15.770: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:16.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:16.771: INFO: rc: 1 May 4 16:17:16.771: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:17.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:17.784: INFO: rc: 1 May 4 16:17:17.784: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 4 16:17:18.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116' May 4 16:17:18.747: INFO: rc: 1 May 4 16:17:18.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116: Command stdout: stderr: + nc -zv -t -w 2 10.10.190.207 30116 nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 4 16:17:18.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116'
May 4 16:17:19.027: INFO: rc: 1
May 4 16:17:19.027: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 exec execpod-affinityp2lx7 -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30116:
Command stdout:

stderr:
+ nc -zv -t -w 2 10.10.190.207 30116
nc: connect to 10.10.190.207 port 30116 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 4 16:17:19.028: FAIL: Unexpected error:
    <*errors.errorString | 0xc002140290>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30116 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30116 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001131ce0, 0x54075e0, 0xc0037dd080, 0xc000d27680, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3511 +0x62e
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3466
k8s.io/kubernetes/test/e2e/network.glob..func24.30()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2541 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003576d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc003576d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc003576d80, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
May 4 16:17:19.029: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-2406, will wait for the garbage collector to delete the pods
May 4 16:17:19.095: INFO: Deleting ReplicationController affinity-nodeport-transition took: 5.645304ms
May 4 16:17:19.195: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.256505ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "services-2406".
STEP: Found 31 events.
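The probe loop visible in the log above (kubectl exec into a client pod, `nc -zv` against the NodePort, retry about once per second until a 2m0s budget is exhausted) can be sketched as a small shell helper. This is an illustrative reconstruction, not the e2e framework's actual code; the function name `probe_until` and the attempt-count parameter are hypothetical, while the kubectl/nc command in the comment is taken verbatim from this run's log.

```shell
#!/bin/sh
# probe_until: run a command repeatedly, ~1s apart, until it succeeds or the
# attempt budget is exhausted. Hypothetical helper mirroring the retry pattern
# in the log; the real e2e test wraps a 2m0s timeout around the probe.
probe_until() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      echo "reachable after $i attempt(s)"
      return 0
    fi
    echo "attempt $i failed; retrying..."
    i=$((i + 1))
    sleep 1
  done
  echo "not reachable within $attempts attempts"
  return 1
}

# The probed command in this run was (values from the log above):
#   /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2406 \
#     exec execpod-affinityp2lx7 -- /bin/sh -x -c 'nc -zv -t -w 2 10.10.190.207 30116'
# Demo with a stand-in command that always succeeds:
probe_until 3 true
```

With a real cluster, substituting the kubectl line for `true` reproduces the log's behavior: each refused connection prints a retry line, and exhausting the budget yields the "not reachable" failure path seen at 16:17:19.028.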
May 4 16:17:30.013: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-hn44d: { } Scheduled: Successfully assigned services-2406/affinity-nodeport-transition-hn44d to node1
May 4 16:17:30.013: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-kqrgt: { } Scheduled: Successfully assigned services-2406/affinity-nodeport-transition-kqrgt to node2
May 4 16:17:30.013: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-qr9hq: { } Scheduled: Successfully assigned services-2406/affinity-nodeport-transition-qr9hq to node2
May 4 16:17:30.013: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityp2lx7: { } Scheduled: Successfully assigned services-2406/execpod-affinityp2lx7 to node2
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:06 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-kqrgt
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:06 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-hn44d
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:06 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-qr9hq
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:08 +0000 UTC - event for affinity-nodeport-transition-hn44d: {multus } AddedInterface: Add eth0 [10.244.4.150/24]
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:08 +0000 UTC - event for affinity-nodeport-transition-hn44d: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:08 +0000 UTC - event for affinity-nodeport-transition-kqrgt: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:08 +0000 UTC - event for affinity-nodeport-transition-kqrgt: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 481.967215ms
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:08 +0000 UTC - event for affinity-nodeport-transition-kqrgt: {kubelet node2} Created: Created container affinity-nodeport-transition
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:08 +0000 UTC - event for affinity-nodeport-transition-kqrgt: {kubelet node2} Started: Started container affinity-nodeport-transition
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:08 +0000 UTC - event for affinity-nodeport-transition-kqrgt: {multus } AddedInterface: Add eth0 [10.244.3.202/24]
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:08 +0000 UTC - event for affinity-nodeport-transition-qr9hq: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:08 +0000 UTC - event for affinity-nodeport-transition-qr9hq: {multus } AddedInterface: Add eth0 [10.244.3.203/24]
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:09 +0000 UTC - event for affinity-nodeport-transition-hn44d: {kubelet node1} Started: Started container affinity-nodeport-transition
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:09 +0000 UTC - event for affinity-nodeport-transition-hn44d: {kubelet node1} Created: Created container affinity-nodeport-transition
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:09 +0000 UTC - event for affinity-nodeport-transition-hn44d: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 1.472665262s
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:09 +0000 UTC - event for affinity-nodeport-transition-qr9hq: {kubelet node2} Started: Started container affinity-nodeport-transition
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:09 +0000 UTC - event for affinity-nodeport-transition-qr9hq: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 508.926228ms
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:09 +0000 UTC - event for affinity-nodeport-transition-qr9hq: {kubelet node2} Created: Created container affinity-nodeport-transition
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:14 +0000 UTC - event for execpod-affinityp2lx7: {kubelet node2} Started: Started container agnhost-container
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:14 +0000 UTC - event for execpod-affinityp2lx7: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 515.387776ms
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:14 +0000 UTC - event for execpod-affinityp2lx7: {kubelet node2} Created: Created container agnhost-container
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:14 +0000 UTC - event for execpod-affinityp2lx7: {multus } AddedInterface: Add eth0 [10.244.3.204/24]
May 4 16:17:30.013: INFO: At 2021-05-04 16:15:14 +0000 UTC - event for execpod-affinityp2lx7: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:17:30.013: INFO: At 2021-05-04 16:17:19 +0000 UTC - event for affinity-nodeport-transition-hn44d: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
May 4 16:17:30.013: INFO: At 2021-05-04 16:17:19 +0000 UTC - event for affinity-nodeport-transition-kqrgt: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
May 4 16:17:30.013: INFO: At 2021-05-04 16:17:19 +0000 UTC - event for affinity-nodeport-transition-qr9hq: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
May 4 16:17:30.013: INFO: At 2021-05-04 16:17:19 +0000 UTC - event for execpod-affinityp2lx7: {kubelet node2} Killing: Stopping container agnhost-container
May 4 16:17:30.015: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:17:30.015: INFO:
May 4 16:17:30.020: INFO: Logging node info for node master1
May 4 16:17:30.022: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 37456 0 2021-05-04 14:43:01 +0000 UTC
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}
},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:28 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:28 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:28 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:17:28 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:17:30.023: INFO: Logging kubelet events for node master1
May 4 16:17:30.025: INFO: Logging pods the kubelet thinks is on node master1
May 4 16:17:30.035: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.035: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:17:30.035: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.035: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:17:30.035: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.035: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:17:30.035: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded)
May 4 16:17:30.035: INFO: Container docker-registry ready: true, restart count 0
May 4 16:17:30.035: INFO: Container nginx ready: true, restart count 0
May 4 16:17:30.035: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:17:30.035: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:17:30.035: INFO: Container node-exporter ready: true, restart count 0
May 4 16:17:30.035: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.035: INFO: Container kube-scheduler ready: true, restart count 0
May 4 16:17:30.035: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:17:30.035: INFO: Init container install-cni ready: true, restart count 0
May 4 16:17:30.035: INFO: Container kube-flannel ready: true, restart count 3
May 4 16:17:30.035: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.035: INFO: Container kube-multus ready: true, restart count 1
May 4 16:17:30.035: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.035: INFO: Container coredns ready: true, restart count 1
May 4 16:17:30.035: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.035: INFO: Container nfd-controller ready: true, restart count 0
W0504 16:17:30.049294 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:17:30.072: INFO: Latency metrics for node master1
May 4 16:17:30.073: INFO: Logging node info for node master2
May 4 16:17:30.076: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 37455 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:27 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:17:27 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:27 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:17:27 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:17:30.076: INFO: Logging kubelet events for node master2
May 4 16:17:30.079: INFO: Logging pods the kubelet thinks is on node master2
May 4 16:17:30.086: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.086: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:17:30.086: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.086: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:17:30.086: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:17:30.086: INFO: Init container install-cni ready: true, restart count 0
May 4 16:17:30.086: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:17:30.086: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.086: INFO: Container kube-multus ready: true, restart count 1
May 4 16:17:30.086: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.086: INFO: Container autoscaler ready: true, restart count 1
May 4 16:17:30.086: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:17:30.086: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:17:30.086: INFO: Container node-exporter ready: true, restart count 0
May 4 16:17:30.086: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.086: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:17:30.086: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:17:30.086: INFO: Container kube-controller-manager ready: true, restart count 2
W0504 16:17:30.099983 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:17:30.123: INFO: Latency metrics for node master2
May 4 16:17:30.123: INFO: Logging node info for node master3
May 4 16:17:30.126: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 37450 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:27 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:27 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:27 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:17:27 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:17:30.126: INFO: Logging kubelet events for node master3 May 4 16:17:30.129: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:17:30.137: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.137: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:17:30.137: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.137: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:17:30.137: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.137: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:17:30.137: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:17:30.137: INFO: Init container install-cni ready: true, restart count 0 May 4 16:17:30.137: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:17:30.137: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.137: INFO: Container kube-multus ready: true, restart count 1 May 4 16:17:30.137: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.137: INFO: Container coredns ready: true, restart count 1 May 4 16:17:30.137: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses 
recorded) May 4 16:17:30.137: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:17:30.137: INFO: Container node-exporter ready: true, restart count 0 May 4 16:17:30.137: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.137: INFO: Container kube-apiserver ready: true, restart count 0 W0504 16:17:30.149225 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:17:30.179: INFO: Latency metrics for node master3 May 4 16:17:30.179: INFO: Logging node info for node node1 May 4 16:17:30.182: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 37415 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:22 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:22 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:22 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:17:22 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:17:30.183: INFO: Logging kubelet events for node node1 May 4 16:17:30.185: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:17:30.202: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:17:30.202: INFO: Init container install-cni ready: true, restart count 2 May 4 16:17:30.202: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:17:30.202: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.202: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:17:30.202: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:17:30.202: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:17:30.202: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:17:30.202: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:17:30.202: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:17:30.202: INFO: Container node-exporter ready: true, restart count 0 May 4 16:17:30.202: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:17:30.202: INFO: Container collectd ready: true, restart count 0 May 4 16:17:30.202: INFO: Container 
collectd-exporter ready: true, restart count 0 May 4 16:17:30.202: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:17:30.202: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.202: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:17:30.202: INFO: server started at 2021-05-04 16:15:55 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.202: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:17:30.202: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.202: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:17:30.202: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.202: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:17:30.202: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.202: INFO: Container liveness-http ready: false, restart count 17 May 4 16:17:30.202: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.203: INFO: Container srv ready: true, restart count 0 May 4 16:17:30.203: INFO: simpletest.rc-w5k5v started at 2021-05-04 16:16:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.203: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.203: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:17:30.203: INFO: Container discover ready: false, restart count 0 May 4 16:17:30.203: INFO: Container init ready: false, restart count 0 May 4 16:17:30.203: INFO: Container install ready: false, restart count 0 May 4 16:17:30.203: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 
container statuses recorded) May 4 16:17:30.203: INFO: Container kube-multus ready: true, restart count 1 May 4 16:17:30.203: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.203: INFO: Container webserver ready: false, restart count 0 May 4 16:17:30.203: INFO: simpletest.rc-rbq26 started at 2021-05-04 16:16:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.203: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.203: INFO: simpletest.rc-jdsnx started at 2021-05-04 16:16:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.203: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.203: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.203: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:17:30.203: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:17:30.203: INFO: Container nodereport ready: true, restart count 0 May 4 16:17:30.203: INFO: Container reconcile ready: true, restart count 0 May 4 16:17:30.203: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:17:30.203: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:17:30.203: INFO: Container grafana ready: true, restart count 0 May 4 16:17:30.203: INFO: Container prometheus ready: true, restart count 1 May 4 16:17:30.203: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:17:30.203: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:17:30.203: INFO: simpletest.rc-sx2x5 started at 2021-05-04 16:16:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.203: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.203: INFO: simpletest.rc-kflds started at 2021-05-04 16:16:56 +0000 UTC (0+1 
container statuses recorded) May 4 16:17:30.203: INFO: Container nginx ready: false, restart count 0 W0504 16:17:30.217188 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:17:30.251: INFO: Latency metrics for node node1 May 4 16:17:30.251: INFO: Logging node info for node node2 May 4 16:17:30.254: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 37405 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 
feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":
{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:20 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:20 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:17:20 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:17:20 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:17:30.255: INFO: Logging kubelet events for node node2 May 4 16:17:30.257: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:17:30.272: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:17:30.272: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:17:30.272: INFO: Container node-exporter ready: true, restart count 0 May 4 16:17:30.272: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:17:30.272: INFO: Container tas-controller ready: true, restart count 0 May 4 16:17:30.272: INFO: Container tas-extender ready: true, restart count 0 May 4 16:17:30.272: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:15:46 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.272: INFO: Container env3cont ready: false, restart count 0 May 4 16:17:30.272: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 
16:17:30.272: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:17:30.273: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:17:30.273: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:17:30.273: INFO: simpletest.rc-6788j started at 2021-05-04 16:16:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.273: INFO: termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c started at 2021-05-04 16:16:44 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container termination-message-container ready: false, restart count 0 May 4 16:17:30.273: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container kube-multus ready: true, restart count 1 May 4 16:17:30.273: INFO: pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 started at 2021-05-04 16:16:21 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.273: INFO: simpletest.rc-d7kw2 started at 2021-05-04 16:16:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.273: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:17:30.273: INFO: Container discover ready: false, restart count 0 May 4 16:17:30.273: INFO: Container init ready: false, restart count 0 May 4 16:17:30.273: INFO: Container install ready: false, restart count 0 May 4 16:17:30.273: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 
container statuses recorded) May 4 16:17:30.273: INFO: Container collectd ready: true, restart count 0 May 4 16:17:30.273: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:17:30.273: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:17:30.273: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 May 4 16:17:30.273: INFO: pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 started at 2021-05-04 16:14:12 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container env-test ready: false, restart count 0 May 4 16:17:30.273: INFO: tester started at 2021-05-04 16:15:59 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container tester ready: false, restart count 0 May 4 16:17:30.273: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:17:30.273: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:17:30.273: INFO: Container nodereport ready: true, restart count 0 May 4 16:17:30.273: INFO: Container reconcile ready: true, restart count 0 May 4 16:17:30.273: INFO: pod-subpath-test-configmap-86d5 started at 2021-05-04 16:17:01 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container test-container-subpath-configmap-86d5 ready: true, restart count 0 May 4 16:17:30.273: INFO: simpletest.rc-kscv6 started at 2021-05-04 16:16:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.273: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 
16:17:30.273: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:17:30.273: INFO: simpletest.rc-lfvlv started at 2021-05-04 16:16:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.273: INFO: simpletest.rc-w9qs8 started at 2021-05-04 16:16:56 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container nginx ready: false, restart count 0 May 4 16:17:30.273: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container webserver ready: true, restart count 0 May 4 16:17:30.273: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:17:30.273: INFO: Init container install-cni ready: true, restart count 2 May 4 16:17:30.273: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:17:30.273: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:17:30.273: INFO: Container cmk-webhook ready: true, restart count 0 W0504 16:17:30.286852 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:17:30.316: INFO: Latency metrics for node node2 May 4 16:17:30.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2406" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • Failure [143.744 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:17:19.028: Unexpected error: <*errors.errorString | 0xc002140290>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30116 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30116 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3511 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":345,"failed":2,"failures":["[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:17:30.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] 
should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:17:30.486: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b33993be-4dc2-47e4-9384-623048fc866c" in namespace "downward-api-2160" to be "Succeeded or Failed" May 4 16:17:30.490: INFO: Pod "downwardapi-volume-b33993be-4dc2-47e4-9384-623048fc866c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.787201ms May 4 16:17:32.493: INFO: Pod "downwardapi-volume-b33993be-4dc2-47e4-9384-623048fc866c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007303419s May 4 16:17:34.500: INFO: Pod "downwardapi-volume-b33993be-4dc2-47e4-9384-623048fc866c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013663873s STEP: Saw pod success May 4 16:17:34.500: INFO: Pod "downwardapi-volume-b33993be-4dc2-47e4-9384-623048fc866c" satisfied condition "Succeeded or Failed" May 4 16:17:34.502: INFO: Trying to get logs from node node2 pod downwardapi-volume-b33993be-4dc2-47e4-9384-623048fc866c container client-container: STEP: delete the pod May 4 16:17:34.564: INFO: Waiting for pod downwardapi-volume-b33993be-4dc2-47e4-9384-623048fc866c to disappear May 4 16:17:34.567: INFO: Pod downwardapi-volume-b33993be-4dc2-47e4-9384-623048fc866c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:17:34.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2160" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":406,"failed":2,"failures":["[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:17:34.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:17:34.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6857" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":19,"skipped":408,"failed":2,"failures":["[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:17:01.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-86d5 STEP: Creating a pod to test atomic-volume-subpath May 4 16:17:01.772: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-86d5" in namespace "subpath-9092" to be "Succeeded or Failed" May 4 16:17:01.777: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.637438ms May 4 16:17:03.780: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008630475s May 4 16:17:05.783: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011111115s May 4 16:17:07.786: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.014507055s May 4 16:17:09.790: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017867988s May 4 16:17:11.793: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020882223s May 4 16:17:13.796: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02463908s May 4 16:17:15.800: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 14.028329512s May 4 16:17:17.803: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 16.031140404s May 4 16:17:19.806: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 18.034300068s May 4 16:17:21.810: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 20.037971912s May 4 16:17:23.813: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 22.041651356s May 4 16:17:25.816: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 24.044524397s May 4 16:17:27.819: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 26.047616176s May 4 16:17:29.822: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 28.050453885s May 4 16:17:31.826: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 30.053866756s May 4 16:17:33.831: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Running", Reason="", readiness=true. Elapsed: 32.058930201s May 4 16:17:35.835: INFO: Pod "pod-subpath-test-configmap-86d5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 34.063623352s STEP: Saw pod success May 4 16:17:35.835: INFO: Pod "pod-subpath-test-configmap-86d5" satisfied condition "Succeeded or Failed" May 4 16:17:35.839: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-86d5 container test-container-subpath-configmap-86d5: STEP: delete the pod May 4 16:17:35.852: INFO: Waiting for pod pod-subpath-test-configmap-86d5 to disappear May 4 16:17:35.854: INFO: Pod pod-subpath-test-configmap-86d5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-86d5 May 4 16:17:35.854: INFO: Deleting pod "pod-subpath-test-configmap-86d5" in namespace "subpath-9092" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:17:35.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9092" for this suite. • [SLOW TEST:34.133 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":408,"failed":1,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:17:35.901: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 4 16:17:35.919: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend May 4 16:17:35.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 create -f -' May 4 16:17:36.250: INFO: stderr: "" May 4 16:17:36.250: INFO: stdout: "service/agnhost-replica created\n" May 4 16:17:36.250: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend May 4 16:17:36.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 create -f -' May 4 16:17:36.533: INFO: stderr: "" May 4 16:17:36.533: INFO: stdout: "service/agnhost-primary created\n" May 4 16:17:36.533: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 4 16:17:36.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 create -f -' May 4 16:17:36.779: INFO: stderr: "" May 4 16:17:36.779: INFO: stdout: "service/frontend created\n" May 4 16:17:36.779: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 4 16:17:36.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 create -f -' May 4 16:17:37.035: INFO: stderr: "" May 4 16:17:37.035: INFO: stdout: "deployment.apps/frontend created\n" May 4 16:17:37.035: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 4 16:17:37.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 create -f -' May 4 16:17:37.340: INFO: stderr: "" May 4 16:17:37.340: INFO: stdout: "deployment.apps/agnhost-primary created\n" May 4 16:17:37.340: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ 
"guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 4 16:17:37.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 create -f -' May 4 16:17:37.586: INFO: stderr: "" May 4 16:17:37.586: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app May 4 16:17:37.586: INFO: Waiting for all frontend pods to be Running. May 4 16:17:42.637: INFO: Waiting for frontend to serve content. May 4 16:17:42.646: INFO: Trying to add a new entry to the guestbook. May 4 16:17:43.655: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 4 16:17:48.665: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 4 16:17:48.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 delete --grace-period=0 --force -f -' May 4 16:17:48.801: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 16:17:48.802: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources May 4 16:17:48.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 delete --grace-period=0 --force -f -' May 4 16:17:48.951: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 4 16:17:48.951: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 4 16:17:48.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 delete --grace-period=0 --force -f -' May 4 16:17:49.086: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 16:17:49.086: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 4 16:17:49.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 delete --grace-period=0 --force -f -' May 4 16:17:49.204: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 16:17:49.204: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 4 16:17:49.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 delete --grace-period=0 --force -f -' May 4 16:17:49.340: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 16:17:49.340: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 4 16:17:49.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7504 delete --grace-period=0 --force -f -' May 4 16:17:49.481: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 4 16:17:49.481: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:17:49.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7504" for this suite. • [SLOW TEST:13.588 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":24,"skipped":425,"failed":1,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:16:56.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes 
the pods STEP: Gathering metrics W0504 16:17:36.702061 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:18:38.718: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. May 4 16:18:38.718: INFO: Deleting pod "simpletest.rc-6788j" in namespace "gc-7576" May 4 16:18:38.725: INFO: Deleting pod "simpletest.rc-d7kw2" in namespace "gc-7576" May 4 16:18:38.730: INFO: Deleting pod "simpletest.rc-jdsnx" in namespace "gc-7576" May 4 16:18:38.736: INFO: Deleting pod "simpletest.rc-kflds" in namespace "gc-7576" May 4 16:18:38.743: INFO: Deleting pod "simpletest.rc-kscv6" in namespace "gc-7576" May 4 16:18:38.749: INFO: Deleting pod "simpletest.rc-lfvlv" in namespace "gc-7576" May 4 16:18:38.757: INFO: Deleting pod "simpletest.rc-rbq26" in namespace "gc-7576" May 4 16:18:38.762: INFO: Deleting pod "simpletest.rc-sx2x5" in namespace "gc-7576" May 4 16:18:38.769: INFO: Deleting pod "simpletest.rc-w5k5v" in namespace "gc-7576" May 4 16:18:38.775: INFO: Deleting pod "simpletest.rc-w9qs8" in namespace "gc-7576" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:18:38.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7576" for this suite. 
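The orphan test above deletes a ReplicationController with orphan propagation and then waits 30s to confirm the garbage collector does not delete the dependent pods. A minimal toy model of that propagation-policy behavior (hypothetical sketch, not client-go or the actual GC implementation) is:

```python
def delete_with_policy(objects, owner_name, policy):
    """Toy model of Kubernetes cascading deletion semantics.

    objects maps object name -> set of owner names (its ownerReferences).
    With propagationPolicy "Orphan", deleting the owner only clears the
    owner reference and dependents survive (what the test above verifies);
    with "Background" or "Foreground", the garbage collector cascades the
    delete to the dependents.
    """
    del objects[owner_name]
    for name in list(objects):
        if owner_name in objects[name]:
            if policy == "Orphan":
                objects[name].discard(owner_name)  # pod kept, reference removed
            else:
                del objects[name]                  # pod deleted by the GC
    return objects
```

This mirrors why the test's `simpletest.rc-*` pods are still present for manual cleanup after the RC is gone: orphan deletion severs ownership instead of cascading.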
• [SLOW TEST:102.142 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":20,"skipped":348,"failed":1,"failures":["[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:14:11.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8443/configmap-test-26d32994-91eb-4674-9494-dcc12bff1ee1 STEP: Creating a pod to test consume configMaps May 4 16:14:12.016: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59" in namespace "configmap-8443" to be "Succeeded or Failed" May 4 16:14:12.018: INFO: Pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007442ms May 4 16:14:14.022: INFO: Pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00605602s May 4 16:14:16.026: INFO: Pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009922284s [identical "Pending" poll entries, logged roughly every 2s from 16:14:18 (elapsed 6s) through 16:19:04 (elapsed 4m52s), elided] May 4 16:19:06.586: INFO: Pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m54.569566821s May 4 16:19:08.590: INFO: Pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.57400438s May 4 16:19:10.594: INFO: Pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.578170029s May 4 16:19:12.609: INFO: Failed to get logs from node "node2" pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59" container "env-test": the server rejected our request for an unknown reason (get pods pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59) STEP: delete the pod May 4 16:19:12.615: INFO: Waiting for pod pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 to disappear May 4 16:19:12.617: INFO: Pod pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 still exists May 4 16:19:14.621: INFO: Waiting for pod pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 to disappear May 4 16:19:14.625: INFO: Pod pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 no longer exists May 4 16:19:14.625: FAIL: Unexpected error: <*errors.errorString | 0xc004ef0ff0>: { s: "expected pod \"pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59\" to be \"Succeeded or Failed\"", } expected pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59" success: Gave up after waiting 5m0s for pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59" to be "Succeeded or Failed" occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00059e420, 0x4c18c6e, 0x12, 0xc003efa000, 0x0, 0xc00155f178, 0x6, 0x6, 0x4de7488) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) 
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:525
k8s.io/kubernetes/test/e2e/common.glob..func1.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:124 +0x954
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000179e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc000179e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc000179e00, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "configmap-8443".
STEP: Found 9 events.
May 4 16:19:14.631: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59: { } Scheduled: Successfully assigned configmap-8443/pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59 to node2
May 4 16:19:14.631: INFO: At 2021-05-04 16:14:13 +0000 UTC - event for pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59: {multus } AddedInterface: Add eth0 [10.244.3.200/24]
May 4 16:19:14.631: INFO: At 2021-05-04 16:14:13 +0000 UTC - event for pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:19:14.631: INFO: At 2021-05-04 16:14:14 +0000 UTC - event for pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:19:14.631: INFO: At 2021-05-04 16:14:14 +0000 UTC - event for pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:19:14.631: INFO: At 2021-05-04 16:14:15 +0000 UTC - event for pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:19:14.631: INFO: At 2021-05-04 16:14:16 +0000 UTC - event for pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59: {multus } AddedInterface: Add eth0 [10.244.3.201/24]
May 4 16:19:14.631: INFO: At 2021-05-04 16:14:16 +0000 UTC - event for pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:19:14.631: INFO: At 2021-05-04 16:14:16 +0000 UTC - event for pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:19:14.634: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:19:14.634: INFO: 
May 4 16:19:14.638: INFO: Logging node info for node master1
May 4 16:19:14.640: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 38618 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:19:14.641: INFO: Logging kubelet events for node master1 May 4 16:19:14.643: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:19:14.660: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:19:14.660: INFO: Init container 
install-cni ready: true, restart count 0
May 4 16:19:14.660: INFO: Container kube-flannel ready: true, restart count 3
May 4 16:19:14.660: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.660: INFO: Container kube-multus ready: true, restart count 1
May 4 16:19:14.660: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.660: INFO: Container coredns ready: true, restart count 1
May 4 16:19:14.660: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.660: INFO: Container nfd-controller ready: true, restart count 0
May 4 16:19:14.660: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.660: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:19:14.660: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.660: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:19:14.660: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.660: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:19:14.660: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded)
May 4 16:19:14.660: INFO: Container docker-registry ready: true, restart count 0
May 4 16:19:14.660: INFO: Container nginx ready: true, restart count 0
May 4 16:19:14.660: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:19:14.660: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:19:14.660: INFO: Container node-exporter ready: true, restart count 0
May 4 16:19:14.660: INFO: 
kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.660: INFO: Container kube-scheduler ready: true, restart count 0 W0504 16:19:14.673944 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:19:14.707: INFO: Latency metrics for node master1 May 4 16:19:14.708: INFO: Logging node info for node master2 May 4 16:19:14.710: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 38610 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:19:14.710: INFO: Logging kubelet events for node master2 May 4 16:19:14.712: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:19:14.726: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.726: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:19:14.726: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.726: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:19:14.726: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.726: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:19:14.726: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.726: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:19:14.726: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:19:14.726: INFO: Init container install-cni ready: true, restart count 0 May 4 16:19:14.726: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:19:14.726: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.726: INFO: Container kube-multus ready: true, restart count 1 May 4 16:19:14.726: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 
container statuses recorded) May 4 16:19:14.726: INFO: Container autoscaler ready: true, restart count 1 May 4 16:19:14.726: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:19:14.726: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:19:14.726: INFO: Container node-exporter ready: true, restart count 0 W0504 16:19:14.739230 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:19:14.768: INFO: Latency metrics for node master2 May 4 16:19:14.768: INFO: Logging node info for node master3 May 4 16:19:14.771: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 38601 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:08 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:08 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:19:14.771: INFO: Logging kubelet events for node master3 May 4 16:19:14.774: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:19:14.789: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.789: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:19:14.789: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.789: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:19:14.789: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:19:14.789: INFO: Init container install-cni ready: true, restart count 0 May 4 16:19:14.789: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:19:14.789: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.789: INFO: Container kube-multus ready: true, restart count 1 May 4 16:19:14.789: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.789: INFO: Container coredns ready: true, restart count 1 May 4 16:19:14.789: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:19:14.789: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:19:14.789: INFO: Container node-exporter ready: true, restart count 0 May 4 16:19:14.789: INFO: kube-apiserver-master3 started 
at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.789: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:19:14.789: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.789: INFO: Container kube-controller-manager ready: true, restart count 2 W0504 16:19:14.801231 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:19:14.828: INFO: Latency metrics for node master3 May 4 16:19:14.828: INFO: Logging node info for node node1 May 4 16:19:14.830: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 38634 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:12 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:12 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:12 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:12 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:19:14.831: INFO: Logging kubelet events for node node1 May 4 16:19:14.833: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:19:14.855: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.855: INFO: Container kube-multus ready: true, restart count 1 May 4 16:19:14.855: INFO: ss2-0 started at 2021-05-04 16:09:26 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.855: INFO: Container webserver ready: false, restart count 0 May 4 16:19:14.855: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.855: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:19:14.855: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:19:14.855: INFO: Container nodereport ready: true, restart count 0 May 4 16:19:14.855: INFO: Container reconcile ready: true, restart count 0 May 4 16:19:14.855: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:19:14.855: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:19:14.855: INFO: Container grafana ready: true, restart count 0 May 4 16:19:14.855: INFO: Container prometheus ready: true, restart count 1 May 4 16:19:14.855: INFO: Container prometheus-config-reloader ready: 
true, restart count 0 May 4 16:19:14.855: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:19:14.855: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:19:14.855: INFO: Init container install-cni ready: true, restart count 2 May 4 16:19:14.855: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:19:14.855: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.855: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:19:14.855: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:19:14.855: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:19:14.855: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:19:14.855: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:19:14.855: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:19:14.855: INFO: Container node-exporter ready: true, restart count 0 May 4 16:19:14.855: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:19:14.855: INFO: Container collectd ready: true, restart count 0 May 4 16:19:14.855: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:19:14.855: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:19:14.855: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.855: INFO: Container c ready: false, restart count 0 May 4 16:19:14.855: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.855: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:19:14.856: INFO: server started 
at 2021-05-04 16:15:55 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.856: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:19:14.856: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.856: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:19:14.856: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.856: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:19:14.856: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.856: INFO: Container liveness-http ready: false, restart count 17 May 4 16:19:14.856: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:19:14.856: INFO: Container srv ready: true, restart count 0 May 4 16:19:14.856: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:19:14.856: INFO: Container discover ready: false, restart count 0 May 4 16:19:14.856: INFO: Container init ready: false, restart count 0 May 4 16:19:14.856: INFO: Container install ready: false, restart count 0 W0504 16:19:14.869539 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:19:14.903: INFO: Latency metrics for node node1 May 4 16:19:14.903: INFO: Logging node info for node node2 May 4 16:19:14.907: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 38629 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:10 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:10 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:10 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:10 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:19:14.907: INFO: Logging kubelet events for node node2
May 4 16:19:14.909: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:19:14.925: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:19:14.925: INFO: Container nodereport ready: true, restart count 0
May 4 16:19:14.925: INFO: Container reconcile ready: true, restart count 0
May 4 16:19:14.925: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:19:14.925: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:19:14.925: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container webserver ready: false, restart count 0
May 4 16:19:14.925: INFO: ss2-1 started at 2021-05-04 16:08:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container webserver ready: true, restart count 0
May 4 16:19:14.925: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:19:14.925: INFO: Init container install-cni ready: true, restart count 2
May 4 16:19:14.925: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:19:14.925: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:19:14.925: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:19:14.925: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:19:14.925: INFO: Container node-exporter ready: true, restart count 0
May 4 16:19:14.925: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:19:14.925: INFO: Container tas-controller ready: true, restart count 0
May 4 16:19:14.925: INFO: Container tas-extender ready: true, restart count 0
May 4 16:19:14.925: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:15:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container env3cont ready: false, restart count 0
May 4 16:19:14.925: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:19:14.925: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:19:14.925: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:19:14.925: INFO: termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c started at 2021-05-04 16:16:44 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container termination-message-container ready: false, restart count 0
May 4 16:19:14.925: INFO: pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee started at 2021-05-04 16:17:49 +0000 UTC (2+1 container statuses recorded)
May 4 16:19:14.925: INFO: Init container init1 ready: false, restart count 0
May 4 16:19:14.925: INFO: Init container init2 ready: false, restart count 0
May 4 16:19:14.925: INFO: Container run1 ready: false, restart count 0
May 4 16:19:14.925: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container kube-multus ready: true, restart count 1
May 4 16:19:14.925: INFO: pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 started at 2021-05-04 16:16:21 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container nginx ready: false, restart count 0
May 4 16:19:14.925: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container c ready: false, restart count 0
May 4 16:19:14.925: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:19:14.925: INFO: Container discover ready: false, restart count 0
May 4 16:19:14.925: INFO: Container init ready: false, restart count 0
May 4 16:19:14.925: INFO: Container install ready: false, restart count 0
May 4 16:19:14.925: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:19:14.925: INFO: Container collectd ready: true, restart count 0
May 4 16:19:14.925: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:19:14.925: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:19:14.925: INFO: pod-qos-class-767fad5e-8b4a-435d-87bf-4cb834c7a678 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container agnhost ready: true, restart count 0
May 4 16:19:14.925: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container e2e-test-httpd-pod ready: false, restart count 0
May 4 16:19:14.925: INFO: tester started at 2021-05-04 16:15:59 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container tester ready: false, restart count 0
May 4 16:19:14.925: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:14.925: INFO: Container nginx-proxy ready: true, restart count 2
W0504 16:19:14.940786 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:19:14.970: INFO: Latency metrics for node node2
May 4 16:19:14.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8443" for this suite.
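The "Gave up after waiting 5m0s" failure reported next comes from the framework repeatedly polling the pod's phase until a deadline expires. As a rough standalone sketch of that poll-until-deadline pattern (not the framework's actual code: the API check is stubbed with a marker file so the loop runs anywhere, and the 5m0s timeout is shortened to 10s):

```shell
#!/bin/sh
# Hypothetical sketch of the poll-until-deadline pattern behind the
# "Gave up after waiting 5m0s" message. The real framework polls the pod
# phase via the API server; here a marker file stands in for the pod
# reaching the "Succeeded" phase.
marker="${TMPDIR:-/tmp}/pod-succeeded.$$"
( sleep 1; touch "$marker" ) &       # simulate the pod completing after 1s
deadline=$(( $(date +%s) + 10 ))     # 10s stands in for the 5m0s timeout
while [ ! -e "$marker" ]; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "Gave up waiting for pod"
        exit 1
    fi
    sleep 1                          # poll interval
done
echo "pod satisfied condition"
rm -f "$marker"
```

In the failing ConfigMap test the condition is never met, so the loop above takes the timeout branch instead.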
• Failure [302.999 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:19:14.625: Unexpected error:
      <*errors.errorString | 0xc004ef0ff0>: {
          s: "expected pod \"pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59\" to be \"Succeeded or Failed\"",
      }
      expected pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59" success: Gave up after waiting 5m0s for pod "pod-configmaps-8d1ee320-8a2b-40a2-bfda-64f8d254cc59" to be "Succeeded or Failed"
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":131,"failed":3,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:19:15.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6505.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6505.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6505.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6505.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 4 16:19:21.093: INFO: DNS probes using dns-6505/dns-test-af973f1d-1270-4977-a294-f0ef0fca67ed succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:19:21.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6505" for this suite.
• [SLOW TEST:6.095 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":151,"failed":3,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:19:21.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 4 16:19:29.203: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 4 16:19:29.206: INFO: Pod pod-with-poststart-http-hook still exists
May 4 16:19:31.206: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 4 16:19:31.210: INFO: Pod pod-with-poststart-http-hook still exists
May 4 16:19:33.206: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 4 16:19:33.209: INFO: Pod pod-with-poststart-http-hook still exists
May 4 16:19:35.206: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 4 16:19:35.209: INFO: Pod pod-with-poststart-http-hook still exists
May 4 16:19:37.206: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 4 16:19:37.209: INFO: Pod pod-with-poststart-http-hook still exists
May 4 16:19:39.206: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 4 16:19:39.209: INFO: Pod pod-with-poststart-http-hook still exists
May 4 16:19:41.207: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 4 16:19:41.209: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:19:41.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1159" for this suite.
• [SLOW TEST:20.089 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":157,"failed":3,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:19:41.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 4 16:19:41.272: INFO: Waiting up to 5m0s for pod "pod-1544bdd5-3dd1-4dbf-99b4-aff74df22be5" in namespace "emptydir-8659" to be "Succeeded or Failed"
May 4 16:19:41.276: INFO: Pod "pod-1544bdd5-3dd1-4dbf-99b4-aff74df22be5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345767ms
May 4 16:19:43.280: INFO: Pod "pod-1544bdd5-3dd1-4dbf-99b4-aff74df22be5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007781396s
May 4 16:19:45.283: INFO: Pod "pod-1544bdd5-3dd1-4dbf-99b4-aff74df22be5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011062286s
STEP: Saw pod success
May 4 16:19:45.283: INFO: Pod "pod-1544bdd5-3dd1-4dbf-99b4-aff74df22be5" satisfied condition "Succeeded or Failed"
May 4 16:19:45.286: INFO: Trying to get logs from node node2 pod pod-1544bdd5-3dd1-4dbf-99b4-aff74df22be5 container test-container:
STEP: delete the pod
May 4 16:19:45.300: INFO: Waiting for pod pod-1544bdd5-3dd1-4dbf-99b4-aff74df22be5 to disappear
May 4 16:19:45.302: INFO: Pod pod-1544bdd5-3dd1-4dbf-99b4-aff74df22be5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:19:45.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8659" for this suite.
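The core of the (non-root,0666,tmpfs) check above is simply "create a file with mode 0666 on the mounted volume and read the permission bits back". A minimal standalone sketch of that check, using an ordinary temp directory in place of the tmpfs-backed emptyDir volume the e2e test actually mounts (paths and names here are illustrative, not the test's own):

```shell
#!/bin/sh
# Minimal sketch of the 0666 permission check; a plain temp dir stands in
# for the tmpfs-backed emptyDir mount the e2e test uses.
vol=$(mktemp -d)                 # stand-in for the emptyDir mount point
touch "$vol/test-file"
chmod 0666 "$vol/test-file"      # the mode the test asserts
# Read back the permission string the way `ls -l` reports it.
perms=$(ls -l "$vol/test-file" | cut -c1-10)
echo "$perms"                    # prints: -rw-rw-rw-
rm -rf "$vol"
```

The real test additionally runs as a non-root UID and mounts the volume with `medium: Memory`, which is what makes it tmpfs-specific.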
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":161,"failed":3,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:08:36.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2620 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 4 16:08:36.281: INFO: Found 0 stateful pods, waiting for 3 May 4 16:08:46.285: INFO: Found 2 stateful pods, waiting for 3 May 4 16:08:56.285: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 4 16:08:56.285: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 4 16:08:56.286: INFO: Waiting for pod 
ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 4 16:08:56.313: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 4 16:09:06.339: INFO: Updating stateful set ss2 May 4 16:09:06.344: INFO: Waiting for Pod statefulset-2620/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 4 16:09:16.350: INFO: Waiting for Pod statefulset-2620/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 4 16:09:26.367: INFO: Found 1 stateful pods, waiting for 3 May 4 16:09:36.371: INFO: Found 2 stateful pods, waiting for 3 [... identical "Found 2 stateful pods, waiting for 3" entry repeated every 10s from 16:09:46 through 16:19:26 ...] May 4 16:19:26.374: INFO: Found 2 stateful pods, waiting for 3 May 4 16:19:26.374: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x54075e0, 0xc003de51e0, 0x300000003, 0xc000238f00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58 +0x10e k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.8() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:419 +0x23fe k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc000703c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000703c80, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 4 16:19:26.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2620 describe po ss2-0' May 4 16:19:26.550: INFO: stderr: "" May 4 16:19:26.550: INFO: stdout: "Name: ss2-0\nNamespace: statefulset-2620\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Tue, 04 May 2021 16:09:26 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-65c7964b94\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.134\"\n ],\n \"mac\": \"d2:69:45:e7:c6:e7\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.134\"\n ],\n \"mac\": \"d2:69:45:e7:c6:e7\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Pending\nIP: 10.244.4.134\nIPs:\n IP: 10.244.4.134\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: \n Image: docker.io/library/httpd:2.4.38-alpine\n Image 
ID: \n Port: \n Host Port: \n State: Waiting\n Reason: ImagePullBackOff\n Ready: False\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9w54 (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n default-token-q9w54:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-q9w54\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10m default-scheduler Successfully assigned statefulset-2620/ss2-0 to node1\n Normal AddedInterface 9m58s multus Add eth0 [10.244.4.134/24]\n Normal Pulling 8m34s (x4 over 9m58s) kubelet Pulling image \"docker.io/library/httpd:2.4.38-alpine\"\n Warning Failed 8m33s (x4 over 9m57s) kubelet Failed to pull image \"docker.io/library/httpd:2.4.38-alpine\": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\n Warning Failed 8m33s (x4 over 9m57s) kubelet Error: ErrImagePull\n Normal BackOff 8m9s (x6 over 9m56s) kubelet Back-off pulling image \"docker.io/library/httpd:2.4.38-alpine\"\n Warning Failed 4m57s (x20 over 9m56s) kubelet Error: ImagePullBackOff\n" May 4 16:19:26.551: INFO: Output of kubectl describe ss2-0: Name: ss2-0 Namespace: statefulset-2620 Priority: 0 Node: node1/10.10.190.207 Start Time: Tue, 04 May 2021 16:09:26 +0000 Labels: baz=blah controller-revision-hash=ss2-65c7964b94 foo=bar statefulset.kubernetes.io/pod-name=ss2-0 Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.134" ], "mac": "d2:69:45:e7:c6:e7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.134" ], "mac": "d2:69:45:e7:c6:e7", "default": true, "dns": {} }] kubernetes.io/psp: collectd Status: Pending IP: 10.244.4.134 IPs: IP: 10.244.4.134 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: Image: docker.io/library/httpd:2.4.38-alpine Image ID: Port: Host Port: State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9w54 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-q9w54: Type: Secret (a volume populated by a Secret) SecretName: default-token-q9w54 Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully 
assigned statefulset-2620/ss2-0 to node1 Normal AddedInterface 9m58s multus Add eth0 [10.244.4.134/24] Normal Pulling 8m34s (x4 over 9m58s) kubelet Pulling image "docker.io/library/httpd:2.4.38-alpine" Warning Failed 8m33s (x4 over 9m57s) kubelet Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Warning Failed 8m33s (x4 over 9m57s) kubelet Error: ErrImagePull Normal BackOff 8m9s (x6 over 9m56s) kubelet Back-off pulling image "docker.io/library/httpd:2.4.38-alpine" Warning Failed 4m57s (x20 over 9m56s) kubelet Error: ImagePullBackOff May 4 16:19:26.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2620 logs ss2-0 --tail=100' May 4 16:19:26.698: INFO: rc: 1 May 4 16:19:26.698: INFO: Last 100 log lines of ss2-0: May 4 16:19:26.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2620 describe po ss2-1' May 4 16:19:26.856: INFO: stderr: "" May 4 16:19:26.856: INFO: stdout: "Name: ss2-1\nNamespace: statefulset-2620\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Tue, 04 May 2021 16:08:40 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-65c7964b94\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-1\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.154\"\n ],\n \"mac\": \"1e:5c:cf:b0:a2:7d\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.154\"\n ],\n \"mac\": \"1e:5c:cf:b0:a2:7d\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.154\nIPs:\n IP: 10.244.3.154\nControlled 
By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: docker://7bab0ba11ab732e840d6f0493ed1f4167478c134b3e0a9e1a14958214b23474f\n Image: docker.io/library/httpd:2.4.38-alpine\n Image ID: docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\n Port: \n Host Port: \n State: Running\n Started: Tue, 04 May 2021 16:08:46 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9w54 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-q9w54:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-q9w54\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10m default-scheduler Successfully assigned statefulset-2620/ss2-1 to node2\n Normal AddedInterface 10m multus Add eth0 [10.244.3.154/24]\n Normal Pulling 10m kubelet Pulling image \"docker.io/library/httpd:2.4.38-alpine\"\n Normal Pulled 10m kubelet Successfully pulled image \"docker.io/library/httpd:2.4.38-alpine\" in 2.738518524s\n Normal Created 10m kubelet Created container webserver\n Normal Started 10m kubelet Started container webserver\n" May 4 16:19:26.857: INFO: Output of kubectl describe ss2-1: Name: ss2-1 Namespace: statefulset-2620 Priority: 0 Node: node2/10.10.190.208 Start Time: Tue, 04 May 2021 16:08:40 +0000 Labels: baz=blah controller-revision-hash=ss2-65c7964b94 foo=bar statefulset.kubernetes.io/pod-name=ss2-1 Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.154" ], "mac": 
"1e:5c:cf:b0:a2:7d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.154" ], "mac": "1e:5c:cf:b0:a2:7d", "default": true, "dns": {} }] kubernetes.io/psp: collectd Status: Running IP: 10.244.3.154 IPs: IP: 10.244.3.154 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: docker://7bab0ba11ab732e840d6f0493ed1f4167478c134b3e0a9e1a14958214b23474f Image: docker.io/library/httpd:2.4.38-alpine Image ID: docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 Port: Host Port: State: Running Started: Tue, 04 May 2021 16:08:46 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9w54 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-q9w54: Type: Secret (a volume populated by a Secret) SecretName: default-token-q9w54 Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully assigned statefulset-2620/ss2-1 to node2 Normal AddedInterface 10m multus Add eth0 [10.244.3.154/24] Normal Pulling 10m kubelet Pulling image "docker.io/library/httpd:2.4.38-alpine" Normal Pulled 10m kubelet Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 2.738518524s Normal Created 10m kubelet Created container webserver Normal Started 10m kubelet Started container webserver May 4 16:19:26.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2620 logs ss2-1 --tail=100' May 4 16:19:27.021: INFO: stderr: "" May 4 16:19:27.021: 
INFO: stdout: "10.244.3.1 - - [04/May/2021:16:17:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n[... identical readiness-probe access-log entry repeated once per second through 16:19:26 ...]\n" May 4 16:19:27.022: INFO: Last 100 log lines of ss2-1: 10.244.3.1 - - [04/May/2021:16:17:47 +0000] "GET /index.html HTTP/1.1" 200 45 [... identical entry repeated once per second through 16:19:25 ...] 10.244.3.1 - - [04/May/2021:16:19:26 +0000] "GET /index.html 
HTTP/1.1" 200 45 May 4 16:19:27.022: INFO: Deleting all statefulset in ns statefulset-2620 May 4 16:19:27.025: INFO: Scaling statefulset ss2 to 0 May 4 16:19:47.039: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:19:47.042: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "statefulset-2620". STEP: Found 45 events. May 4 16:19:47.057: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: { } Scheduled: Successfully assigned statefulset-2620/ss2-0 to node1 May 4 16:19:47.058: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: { } Scheduled: Successfully assigned statefulset-2620/ss2-0 to node1 May 4 16:19:47.058: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-1: { } Scheduled: Successfully assigned statefulset-2620/ss2-1 to node2 May 4 16:19:47.058: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-2: { } Scheduled: Successfully assigned statefulset-2620/ss2-2 to node2 May 4 16:19:47.058: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-2: { } Scheduled: Successfully assigned statefulset-2620/ss2-2 to node2 May 4 16:19:47.058: INFO: At 2021-05-04 16:08:36 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful May 4 16:19:47.058: INFO: At 2021-05-04 16:08:37 +0000 UTC - event for ss2-0: {kubelet node1} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:19:47.058: INFO: At 2021-05-04 16:08:37 +0000 UTC - event for ss2-0: {multus } AddedInterface: Add eth0 [10.244.4.120/24] May 4 16:19:47.058: INFO: At 2021-05-04 16:08:39 +0000 UTC - event for ss2-0: {kubelet node1} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 1.612959536s May 4 16:19:47.058: INFO: At 2021-05-04 16:08:39 +0000 UTC - event for ss2-0: {kubelet node1} Created: Created 
container webserver May 4 16:19:47.058: INFO: At 2021-05-04 16:08:40 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful May 4 16:19:47.058: INFO: At 2021-05-04 16:08:40 +0000 UTC - event for ss2-0: {kubelet node1} Started: Started container webserver May 4 16:19:47.058: INFO: At 2021-05-04 16:08:42 +0000 UTC - event for ss2-1: {multus } AddedInterface: Add eth0 [10.244.3.154/24] May 4 16:19:47.058: INFO: At 2021-05-04 16:08:42 +0000 UTC - event for ss2-1: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:19:47.058: INFO: At 2021-05-04 16:08:45 +0000 UTC - event for ss2-1: {kubelet node2} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 2.738518524s May 4 16:19:47.058: INFO: At 2021-05-04 16:08:46 +0000 UTC - event for ss2-1: {kubelet node2} Created: Created container webserver May 4 16:19:47.058: INFO: At 2021-05-04 16:08:46 +0000 UTC - event for ss2-1: {kubelet node2} Started: Started container webserver May 4 16:19:47.058: INFO: At 2021-05-04 16:08:47 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful May 4 16:19:47.058: INFO: At 2021-05-04 16:08:49 +0000 UTC - event for ss2-2: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:19:47.058: INFO: At 2021-05-04 16:08:49 +0000 UTC - event for ss2-2: {multus } AddedInterface: Add eth0 [10.244.3.157/24] May 4 16:19:47.058: INFO: At 2021-05-04 16:08:52 +0000 UTC - event for ss2-2: {kubelet node2} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 2.717698991s May 4 16:19:47.058: INFO: At 2021-05-04 16:08:52 +0000 UTC - event for ss2-2: {kubelet node2} Started: Started container webserver May 4 16:19:47.058: INFO: At 2021-05-04 16:08:52 +0000 UTC - event for ss2-2: {kubelet node2} Created: Created container webserver May 4 16:19:47.058: INFO: At 2021-05-04 
16:09:06 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-2 in StatefulSet ss2 successful May 4 16:19:47.058: INFO: At 2021-05-04 16:09:06 +0000 UTC - event for ss2-2: {kubelet node2} Killing: Stopping container webserver May 4 16:19:47.058: INFO: At 2021-05-04 16:09:06 +0000 UTC - event for ss2-2: {kubelet node2} Unhealthy: Readiness probe failed: Get "http://10.244.3.157:80/index.html": dial tcp 10.244.3.157:80: connect: connection refused May 4 16:19:47.058: INFO: At 2021-05-04 16:09:08 +0000 UTC - event for ss2-2: {kubelet node2} Unhealthy: Readiness probe failed: Get "http://10.244.3.157:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) May 4 16:19:47.058: INFO: At 2021-05-04 16:09:21 +0000 UTC - event for ss2-2: {multus } AddedInterface: Add eth0 [10.244.3.169/24] May 4 16:19:47.058: INFO: At 2021-05-04 16:09:21 +0000 UTC - event for ss2-2: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.39-alpine" May 4 16:19:47.058: INFO: At 2021-05-04 16:09:22 +0000 UTC - event for ss2-2: {kubelet node2} Failed: Failed to pull image "docker.io/library/httpd:2.4.39-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:19:47.058: INFO: At 2021-05-04 16:09:22 +0000 UTC - event for ss2-2: {kubelet node2} Failed: Error: ErrImagePull May 4 16:19:47.058: INFO: At 2021-05-04 16:09:23 +0000 UTC - event for ss2-2: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:19:47.058: INFO: At 2021-05-04 16:09:23 +0000 UTC - event for ss2-2: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.39-alpine" May 4 16:19:47.058: INFO: At 2021-05-04 16:09:26 +0000 UTC - event for ss2-0: {kubelet node1} Killing: Stopping container webserver May 4 16:19:47.058: INFO: At 2021-05-04 16:09:28 +0000 UTC - event for ss2-0: {kubelet node1} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:19:47.058: INFO: At 2021-05-04 16:09:28 +0000 UTC - event for ss2-0: {multus } AddedInterface: Add eth0 [10.244.4.134/24] May 4 16:19:47.058: INFO: At 2021-05-04 16:09:29 +0000 UTC - event for ss2-0: {kubelet node1} Failed: Error: ErrImagePull May 4 16:19:47.058: INFO: At 2021-05-04 16:09:29 +0000 UTC - event for ss2-0: {kubelet node1} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:19:47.058: INFO: At 2021-05-04 16:09:30 +0000 UTC - event for ss2-0: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:19:47.058: INFO: At 2021-05-04 16:09:30 +0000 UTC - event for ss2-0: {kubelet node1} Failed: Error: ImagePullBackOff May 4 16:19:47.058: INFO: At 2021-05-04 16:19:27 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-1 in StatefulSet ss2 successful May 4 16:19:47.058: INFO: At 2021-05-04 16:19:27 +0000 UTC - event for ss2-1: {kubelet node2} Unhealthy: Readiness probe failed: Get "http://10.244.3.154:80/index.html": dial tcp 10.244.3.154:80: connect: connection refused May 4 16:19:47.058: INFO: At 2021-05-04 16:19:27 +0000 UTC - event for ss2-1: {kubelet node2} Killing: Stopping container webserver May 4 16:19:47.058: INFO: At 2021-05-04 16:19:29 +0000 UTC - event for ss2-1: {kubelet node2} Unhealthy: Readiness probe failed: Get "http://10.244.3.154:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) May 4 16:19:47.058: INFO: At 2021-05-04 16:19:39 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful May 4 16:19:47.060: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:19:47.060: INFO: May 4 16:19:47.065: INFO: Logging node info for node master1 May 4 16:19:47.067: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 38871 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true 
flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:39 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:39 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:39 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:39 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:19:47.069: INFO: Logging kubelet events for node master1 May 4 16:19:47.072: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:19:47.095: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:19:47.095: INFO: Init container 
install-cni ready: true, restart count 0 May 4 16:19:47.095: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:19:47.095: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.095: INFO: Container kube-multus ready: true, restart count 1 May 4 16:19:47.095: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.095: INFO: Container coredns ready: true, restart count 1 May 4 16:19:47.095: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.095: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:19:47.095: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.095: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:19:47.095: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.095: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:19:47.095: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.095: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:19:47.095: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.095: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:19:47.095: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:19:47.095: INFO: Container docker-registry ready: true, restart count 0 May 4 16:19:47.095: INFO: Container nginx ready: true, restart count 0 May 4 16:19:47.095: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses 
recorded) May 4 16:19:47.095: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:19:47.095: INFO: Container node-exporter ready: true, restart count 0 W0504 16:19:47.109220 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:19:47.135: INFO: Latency metrics for node master1 May 4 16:19:47.135: INFO: Logging node info for node master2 May 4 16:19:47.138: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 38870 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:19:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:38 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:19:47.139: INFO: Logging kubelet events for node master2 May 4 16:19:47.140: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:19:47.148: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:19:47.148: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:19:47.148: INFO: Container node-exporter ready: true, restart count 0 May 4 16:19:47.148: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.148: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:19:47.148: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.148: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:19:47.148: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.148: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:19:47.148: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.148: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:19:47.148: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:19:47.148: INFO: Init container install-cni ready: true, restart count 0 May 4 16:19:47.148: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:19:47.148: INFO: 
kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.148: INFO: Container kube-multus ready: true, restart count 1 May 4 16:19:47.148: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.148: INFO: Container autoscaler ready: true, restart count 1 W0504 16:19:47.163213 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:19:47.193: INFO: Latency metrics for node master2 May 4 16:19:47.193: INFO: Logging node info for node master3 May 4 16:19:47.196: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 38865 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:38 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:38 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:19:47.196: INFO: Logging kubelet events for node master3
May 4 16:19:47.198: INFO: Logging pods the kubelet thinks is on node master3
May 4 16:19:47.206: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.206: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:19:47.206: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.206: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:19:47.206: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:19:47.206: INFO: Init container install-cni ready: true, restart count 0
May 4 16:19:47.206: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:19:47.206: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.206: INFO: Container kube-multus ready: true, restart count 1
May 4 16:19:47.206: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.206: INFO: Container coredns ready: true, restart count 1
May 4 16:19:47.206: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:19:47.206: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:19:47.206: INFO: Container node-exporter ready: true, restart count 0
May 4 16:19:47.206: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.206: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:19:47.206: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.206: INFO: Container kube-controller-manager ready: true, restart count 2
W0504 16:19:47.220980 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:19:47.254: INFO: Latency metrics for node master3
May 4 16:19:47.254: INFO: Logging node info for node node1
May 4 16:19:47.256: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 38907 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:42 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:42 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:42 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:42 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:19:47.257: INFO: Logging kubelet events for node node1
May 4 16:19:47.259: INFO: Logging pods the kubelet thinks is on node node1
May 4 16:19:47.275: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:19:47.275: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:19:47.275: INFO: Container node-exporter ready: true, restart count 0
May 4 16:19:47.275: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:19:47.275: INFO: Init container install-cni ready: true, restart count 2
May 4 16:19:47.275: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:19:47.275: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:19:47.275: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded)
May 4 16:19:47.275: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:19:47.275: INFO: Container prometheus-operator ready: true, restart count 0
May 4 16:19:47.275: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:19:47.275: INFO: Container collectd ready: true, restart count 0
May 4 16:19:47.275: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:19:47.275: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:19:47.275: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container c ready: false, restart count 0
May 4 16:19:47.275: INFO: server started at 2021-05-04 16:15:55 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container agnhost-container ready: true, restart count 0
May 4 16:19:47.275: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:19:47.275: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container srv ready: true, restart count 0
May 4 16:19:47.275: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:19:47.275: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:19:47.275: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container liveness-http ready: false, restart count 17
May 4 16:19:47.275: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded)
May 4 16:19:47.275: INFO: Container discover ready: false, restart count 0
May 4 16:19:47.275: INFO: Container init ready: false, restart count 0
May 4 16:19:47.275: INFO: Container install ready: false, restart count 0
May 4 16:19:47.275: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container kube-multus ready: true, restart count 1
May 4 16:19:47.275: INFO: pod-subpath-test-secret-xpkg started at 2021-05-04 16:19:45 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container test-container-subpath-secret-xpkg ready: false, restart count 0
May 4 16:19:47.275: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:19:47.275: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 16:19:47.275: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:19:47.275: INFO: Container nodereport ready: true, restart count 0
May 4 16:19:47.275: INFO: Container reconcile ready: true, restart count 0
May 4 16:19:47.275: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded)
May 4 16:19:47.275: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 4 16:19:47.275: INFO: Container grafana ready: true, restart count 0
May 4 16:19:47.275: INFO: Container prometheus ready: true, restart count 1
May 4 16:19:47.275: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 4 16:19:47.275: INFO: Container rules-configmap-reloader ready: true, restart count 0
W0504 16:19:47.290054 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:19:47.321: INFO: Latency metrics for node node1 May 4 16:19:47.321: INFO: Logging node info for node node2 May 4 16:19:47.324: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 38890 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:41 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:41 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:19:41 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:19:41 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:19:47.325: INFO: Logging kubelet events for node node2 May 4 16:19:47.328: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:19:47.344: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:19:47.344: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:19:47.344: INFO: Container nodereport ready: true, restart count 0 May 4 16:19:47.344: INFO: Container reconcile ready: true, restart count 0 May 4 16:19:47.344: INFO: pod-handle-http-request started at 2021-05-04 16:19:21 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container pod-handle-http-request ready: false, restart count 0 May 4 16:19:47.344: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 
16:19:47.344: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:19:47.344: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container webserver ready: false, restart count 0 May 4 16:19:47.344: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:19:47.344: INFO: Init container install-cni ready: true, restart count 2 May 4 16:19:47.344: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:19:47.344: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:19:47.344: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:19:47.344: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:19:47.344: INFO: Container node-exporter ready: true, restart count 0 May 4 16:19:47.344: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:19:47.344: INFO: Container tas-controller ready: true, restart count 0 May 4 16:19:47.344: INFO: Container tas-extender ready: true, restart count 0 May 4 16:19:47.344: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:15:46 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container env3cont ready: false, restart count 0 May 4 16:19:47.344: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:19:47.344: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses 
recorded) May 4 16:19:47.344: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:19:47.344: INFO: pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 started at 2021-05-04 16:16:21 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container nginx ready: false, restart count 0 May 4 16:19:47.344: INFO: termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c started at 2021-05-04 16:16:44 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container termination-message-container ready: false, restart count 0 May 4 16:19:47.344: INFO: pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee started at 2021-05-04 16:17:49 +0000 UTC (2+1 container statuses recorded) May 4 16:19:47.344: INFO: Init container init1 ready: false, restart count 0 May 4 16:19:47.344: INFO: Init container init2 ready: false, restart count 0 May 4 16:19:47.344: INFO: Container run1 ready: false, restart count 0 May 4 16:19:47.344: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container kube-multus ready: true, restart count 1 May 4 16:19:47.344: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:19:47.344: INFO: Container collectd ready: true, restart count 0 May 4 16:19:47.344: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:19:47.344: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:19:47.344: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container c ready: false, restart count 0 May 4 16:19:47.344: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:19:47.344: INFO: Container discover ready: false, restart count 0 May 4 16:19:47.344: INFO: Container init ready: false, restart count 0 May 4 16:19:47.344: INFO: Container 
install ready: false, restart count 0 May 4 16:19:47.344: INFO: e2e-test-httpd-pod started at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 May 4 16:19:47.344: INFO: pod-qos-class-767fad5e-8b4a-435d-87bf-4cb834c7a678 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container agnhost ready: false, restart count 0 May 4 16:19:47.344: INFO: tester started at 2021-05-04 16:15:59 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container tester ready: false, restart count 0 May 4 16:19:47.344: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:19:47.344: INFO: Container nginx-proxy ready: true, restart count 2 W0504 16:19:47.358720 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:19:47.390: INFO: Latency metrics for node node2 May 4 16:19:47.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2620" for this suite. 
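For context on the `statefulset-2620` suite being torn down above: the canary and phased rolling-update behaviour it exercises is driven by the StatefulSet `updateStrategy.rollingUpdate.partition` field, which holds back the new pod template from pods with ordinals below the partition value. A minimal sketch of such a manifest — the names are hypothetical and the image is taken from the node image list logged above; this is not the manifest the e2e framework generates:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                  # hypothetical name; the test generates its own
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2           # canary: only pods with ordinal >= 2 get the new template
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: webserver      # matches the container name logged for ss-0 above
        image: httpd:2.4.38-alpine
```

Lowering `partition` step by step (2 → 1 → 0) produces the phased rollout the test verifies; the failure here was pods never reaching Running within the wait budget, not the update logic itself.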
• Failure [671.150 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:19:26.374: Failed waiting for pods to enter running: timed out waiting for the condition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":25,"skipped":477,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:19:47.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name 
secret-test-7e5778b5-5e69-4e96-97c7-c38333f80b0f STEP: Creating a pod to test consume secrets May 4 16:19:47.515: INFO: Waiting up to 5m0s for pod "pod-secrets-683ed405-a863-416a-9119-d4c6c1dae61b" in namespace "secrets-2520" to be "Succeeded or Failed" May 4 16:19:47.517: INFO: Pod "pod-secrets-683ed405-a863-416a-9119-d4c6c1dae61b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043674ms May 4 16:19:49.519: INFO: Pod "pod-secrets-683ed405-a863-416a-9119-d4c6c1dae61b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004680991s May 4 16:19:51.523: INFO: Pod "pod-secrets-683ed405-a863-416a-9119-d4c6c1dae61b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008712938s STEP: Saw pod success May 4 16:19:51.523: INFO: Pod "pod-secrets-683ed405-a863-416a-9119-d4c6c1dae61b" satisfied condition "Succeeded or Failed" May 4 16:19:51.526: INFO: Trying to get logs from node node1 pod pod-secrets-683ed405-a863-416a-9119-d4c6c1dae61b container secret-volume-test: STEP: delete the pod May 4 16:19:51.538: INFO: Waiting for pod pod-secrets-683ed405-a863-416a-9119-d4c6c1dae61b to disappear May 4 16:19:51.539: INFO: Pod pod-secrets-683ed405-a863-416a-9119-d4c6c1dae61b no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:19:51.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2520" for this suite. STEP: Destroying namespace "secret-namespace-8321" for this suite. 
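The Secrets test above checks that a pod can mount a secret volume even when a different secret with the same name exists in another namespace (`secret-namespace-8321`). This works because a `secretName` reference is always resolved in the pod's own namespace. A sketch of the shape of the test pod, with hypothetical names and an assumed key path (the real test uses generated UID suffixes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # hypothetical; the test appends a UID
  namespace: secrets-2520
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test       # container name matches the one logged above
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]   # assumed secret key
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # resolved only within secrets-2520
```

The same-named secret in `secret-namespace-8321` is never consulted, which is exactly what the test asserts.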
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":496,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:19:51.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:19:51.620: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 4 16:19:59.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 --namespace=crd-publish-openapi-1338 create -f -' May 4 16:19:59.918: INFO: stderr: "" May 4 16:19:59.918: INFO: stdout: "e2e-test-crd-publish-openapi-1398-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 4 16:19:59.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 --namespace=crd-publish-openapi-1338 delete e2e-test-crd-publish-openapi-1398-crds test-foo' May 4 16:20:00.079: INFO: stderr: "" May 4 16:20:00.079: INFO: stdout: 
"e2e-test-crd-publish-openapi-1398-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 4 16:20:00.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 --namespace=crd-publish-openapi-1338 apply -f -' May 4 16:20:00.368: INFO: stderr: "" May 4 16:20:00.368: INFO: stdout: "e2e-test-crd-publish-openapi-1398-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 4 16:20:00.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 --namespace=crd-publish-openapi-1338 delete e2e-test-crd-publish-openapi-1398-crds test-foo' May 4 16:20:00.536: INFO: stderr: "" May 4 16:20:00.536: INFO: stdout: "e2e-test-crd-publish-openapi-1398-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 4 16:20:00.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 --namespace=crd-publish-openapi-1338 create -f -' May 4 16:20:00.770: INFO: rc: 1 May 4 16:20:00.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 --namespace=crd-publish-openapi-1338 apply -f -' May 4 16:20:00.986: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 4 16:20:00.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 --namespace=crd-publish-openapi-1338 create -f -' May 4 16:20:01.218: INFO: rc: 1 May 4 16:20:01.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 --namespace=crd-publish-openapi-1338 apply -f -' May 4 16:20:01.436: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 4 16:20:01.437: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 explain e2e-test-crd-publish-openapi-1398-crds' May 4 16:20:01.735: INFO: stderr: "" May 4 16:20:01.735: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1398-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 4 16:20:01.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 explain e2e-test-crd-publish-openapi-1398-crds.metadata' May 4 16:20:02.020: INFO: stderr: "" May 4 16:20:02.020: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1398-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 4 16:20:02.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 explain e2e-test-crd-publish-openapi-1398-crds.spec' May 4 16:20:02.329: INFO: stderr: "" May 4 16:20:02.329: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1398-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 4 16:20:02.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 explain e2e-test-crd-publish-openapi-1398-crds.spec.bars' May 4 16:20:02.584: INFO: stderr: "" May 4 16:20:02.584: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1398-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 4 16:20:02.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 explain e2e-test-crd-publish-openapi-1398-crds.spec.bars2' May 4 16:20:02.869: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:20:05.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1338" for this suite. • [SLOW TEST:14.146 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":27,"skipped":526,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:19:45.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-xpkg STEP: Creating a pod to test atomic-volume-subpath May 4 16:19:45.369: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-secret-xpkg" in namespace "subpath-5382" to be "Succeeded or Failed" May 4 16:19:45.373: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.687709ms May 4 16:19:47.376: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006123667s May 4 16:19:49.379: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 4.009217237s May 4 16:19:51.381: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 6.0120187s May 4 16:19:53.384: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 8.014675804s May 4 16:19:55.387: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 10.017044264s May 4 16:19:57.389: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 12.019800134s May 4 16:19:59.392: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 14.022751476s May 4 16:20:01.396: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 16.026982135s May 4 16:20:03.400: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 18.030480227s May 4 16:20:05.403: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 20.033906344s May 4 16:20:07.407: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Running", Reason="", readiness=true. Elapsed: 22.037690778s May 4 16:20:09.410: INFO: Pod "pod-subpath-test-secret-xpkg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.04061056s STEP: Saw pod success May 4 16:20:09.410: INFO: Pod "pod-subpath-test-secret-xpkg" satisfied condition "Succeeded or Failed" May 4 16:20:09.412: INFO: Trying to get logs from node node1 pod pod-subpath-test-secret-xpkg container test-container-subpath-secret-xpkg: STEP: delete the pod May 4 16:20:09.426: INFO: Waiting for pod pod-subpath-test-secret-xpkg to disappear May 4 16:20:09.427: INFO: Pod pod-subpath-test-secret-xpkg no longer exists STEP: Deleting pod pod-subpath-test-secret-xpkg May 4 16:20:09.427: INFO: Deleting pod "pod-subpath-test-secret-xpkg" in namespace "subpath-5382" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:20:09.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5382" for this suite. • [SLOW TEST:24.113 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":167,"failed":3,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] 
EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:20:05.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 4 16:20:05.811: INFO: Waiting up to 5m0s for pod "pod-52fc481e-6ddd-4531-aa8f-f2262f1107c9" in namespace "emptydir-3689" to be "Succeeded or Failed" May 4 16:20:05.813: INFO: Pod "pod-52fc481e-6ddd-4531-aa8f-f2262f1107c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003781ms May 4 16:20:07.817: INFO: Pod "pod-52fc481e-6ddd-4531-aa8f-f2262f1107c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005696065s May 4 16:20:09.820: INFO: Pod "pod-52fc481e-6ddd-4531-aa8f-f2262f1107c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009281423s STEP: Saw pod success May 4 16:20:09.820: INFO: Pod "pod-52fc481e-6ddd-4531-aa8f-f2262f1107c9" satisfied condition "Succeeded or Failed" May 4 16:20:09.823: INFO: Trying to get logs from node node2 pod pod-52fc481e-6ddd-4531-aa8f-f2262f1107c9 container test-container: STEP: delete the pod May 4 16:20:09.836: INFO: Waiting for pod pod-52fc481e-6ddd-4531-aa8f-f2262f1107c9 to disappear May 4 16:20:09.837: INFO: Pod pod-52fc481e-6ddd-4531-aa8f-f2262f1107c9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:20:09.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3689" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":543,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:20:09.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 4 16:20:09.483: INFO: Waiting up to 5m0s for pod "pod-64185e40-c3ac-48df-96d9-26cf53707311" in namespace "emptydir-3662" to be "Succeeded or Failed" May 4 16:20:09.486: INFO: Pod "pod-64185e40-c3ac-48df-96d9-26cf53707311": Phase="Pending", Reason="", readiness=false. Elapsed: 3.505401ms May 4 16:20:11.489: INFO: Pod "pod-64185e40-c3ac-48df-96d9-26cf53707311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006219772s May 4 16:20:13.492: INFO: Pod "pod-64185e40-c3ac-48df-96d9-26cf53707311": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009809234s STEP: Saw pod success May 4 16:20:13.492: INFO: Pod "pod-64185e40-c3ac-48df-96d9-26cf53707311" satisfied condition "Succeeded or Failed" May 4 16:20:13.495: INFO: Trying to get logs from node node1 pod pod-64185e40-c3ac-48df-96d9-26cf53707311 container test-container: STEP: delete the pod May 4 16:20:13.509: INFO: Waiting for pod pod-64185e40-c3ac-48df-96d9-26cf53707311 to disappear May 4 16:20:13.511: INFO: Pod pod-64185e40-c3ac-48df-96d9-26cf53707311 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:20:13.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3662" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":173,"failed":3,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:20:13.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 4 16:20:13.584: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1732 /api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed 8678436c-3771-4a99-a06b-211bb4af72cc 39320 0 2021-05-04 16:20:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-04 16:20:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:20:13.585: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1732 /api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed 8678436c-3771-4a99-a06b-211bb4af72cc 39321 0 2021-05-04 16:20:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-04 16:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:20:13.585: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1732 /api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed 8678436c-3771-4a99-a06b-211bb4af72cc 39322 0 2021-05-04 16:20:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-04 16:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the 
selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 4 16:20:23.614: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1732 /api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed 8678436c-3771-4a99-a06b-211bb4af72cc 39422 0 2021-05-04 16:20:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-04 16:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:20:23.614: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1732 /api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed 8678436c-3771-4a99-a06b-211bb4af72cc 39423 0 2021-05-04 16:20:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-04 16:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:20:23.614: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1732 /api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed 8678436c-3771-4a99-a06b-211bb4af72cc 39424 0 2021-05-04 16:20:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-04 16:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:20:23.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1732" for this suite. • [SLOW TEST:10.079 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":7,"skipped":187,"failed":3,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:20:09.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service 
nodeport-service with the type=NodePort in namespace services-3984 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3984 STEP: creating replication controller externalsvc in namespace services-3984 I0504 16:20:09.950084 32 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3984, replica count: 2 I0504 16:20:13.000723 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 16:20:16.001152 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 4 16:20:16.016: INFO: Creating new exec pod May 4 16:20:20.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3984 exec execpod2jxxc -- /bin/sh -x -c nslookup nodeport-service.services-3984.svc.cluster.local' May 4 16:20:20.317: INFO: stderr: "+ nslookup nodeport-service.services-3984.svc.cluster.local\n" May 4 16:20:20.317: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-3984.svc.cluster.local\tcanonical name = externalsvc.services-3984.svc.cluster.local.\nName:\texternalsvc.services-3984.svc.cluster.local\nAddress: 10.233.39.127\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3984, will wait for the garbage collector to delete the pods May 4 16:20:20.378: INFO: Deleting ReplicationController externalsvc took: 7.131302ms May 4 16:20:21.078: INFO: Terminating ReplicationController externalsvc pods took: 700.102106ms May 4 16:20:29.989: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:20:29.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3984" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:20.092 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from NodePort to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":29,"skipped":579,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:20:30.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 4 16:20:30.043: INFO: Waiting up to 5m0s for pod "downwardapi-volume-783fc50b-3d0f-4dba-b13e-c285e9f42e6e" in namespace "downward-api-594" to be "Succeeded or Failed"
May 4 16:20:30.046: INFO: Pod "downwardapi-volume-783fc50b-3d0f-4dba-b13e-c285e9f42e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812733ms
May 4 16:20:32.050: INFO: Pod "downwardapi-volume-783fc50b-3d0f-4dba-b13e-c285e9f42e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007067942s
May 4 16:20:34.054: INFO: Pod "downwardapi-volume-783fc50b-3d0f-4dba-b13e-c285e9f42e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011324207s
STEP: Saw pod success
May 4 16:20:34.054: INFO: Pod "downwardapi-volume-783fc50b-3d0f-4dba-b13e-c285e9f42e6e" satisfied condition "Succeeded or Failed"
May 4 16:20:34.057: INFO: Trying to get logs from node node2 pod downwardapi-volume-783fc50b-3d0f-4dba-b13e-c285e9f42e6e container client-container:
STEP: delete the pod
May 4 16:20:34.071: INFO: Waiting for pod downwardapi-volume-783fc50b-3d0f-4dba-b13e-c285e9f42e6e to disappear
May 4 16:20:34.074: INFO: Pod downwardapi-volume-783fc50b-3d0f-4dba-b13e-c285e9f42e6e no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:20:34.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-594" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":581,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:20:34.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:20:50.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-159" for this suite.
• [SLOW TEST:16.117 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":31,"skipped":588,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:20:50.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1307
STEP: creating the pod
May 4 16:20:50.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8273 create -f -'
May 4 16:20:50.575: INFO: stderr: ""
May 4 16:20:50.575: INFO: stdout: "pod/pause created\n"
May 4 16:20:50.575: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 4 16:20:50.576: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8273" to be "running and ready"
May 4 16:20:50.579: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.083866ms
May 4 16:20:52.582: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006170721s
May 4 16:20:54.585: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.009270623s
May 4 16:20:54.585: INFO: Pod "pause" satisfied condition "running and ready"
May 4 16:20:54.585: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: adding the label testing-label with value testing-label-value to a pod
May 4 16:20:54.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8273 label pods pause testing-label=testing-label-value'
May 4 16:20:54.737: INFO: stderr: ""
May 4 16:20:54.737: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 4 16:20:54.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8273 get pod pause -L testing-label'
May 4 16:20:54.883: INFO: stderr: ""
May 4 16:20:54.883: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 4 16:20:54.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8273 label pods pause testing-label-'
May 4 16:20:55.035: INFO: stderr: ""
May 4 16:20:55.035: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 4 16:20:55.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8273 get pod pause -L testing-label'
May 4 16:20:55.178: INFO: stderr: ""
May 4 16:20:55.178: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n"
[AfterEach] Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1313
STEP: using delete to clean up resources
May 4 16:20:55.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8273 delete --grace-period=0 --force -f -'
May 4 16:20:55.330: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 4 16:20:55.330: INFO: stdout: "pod \"pause\" force deleted\n"
May 4 16:20:55.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8273 get rc,svc -l name=pause --no-headers'
May 4 16:20:55.505: INFO: stderr: "No resources found in kubectl-8273 namespace.\n"
May 4 16:20:55.505: INFO: stdout: ""
May 4 16:20:55.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8273 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 4 16:20:55.673: INFO: stderr: ""
May 4 16:20:55.673: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:20:55.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8273" for this suite.
• [SLOW TEST:5.465 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1305
should update the label on a resource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":32,"skipped":588,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:20:55.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-5e7daa35-34b1-44ee-9447-268828584694
STEP: Creating a pod to test consume configMaps
May 4 16:20:55.792: INFO: Waiting up to 5m0s for pod "pod-configmaps-aa9e4744-23e2-4705-a233-fbf95da5c910" in namespace "configmap-9446" to be "Succeeded or Failed"
May 4 16:20:55.794: INFO: Pod "pod-configmaps-aa9e4744-23e2-4705-a233-fbf95da5c910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200617ms
May 4 16:20:57.797: INFO: Pod "pod-configmaps-aa9e4744-23e2-4705-a233-fbf95da5c910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004972201s
May 4 16:20:59.800: INFO: Pod "pod-configmaps-aa9e4744-23e2-4705-a233-fbf95da5c910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008228971s
STEP: Saw pod success
May 4 16:20:59.800: INFO: Pod "pod-configmaps-aa9e4744-23e2-4705-a233-fbf95da5c910" satisfied condition "Succeeded or Failed"
May 4 16:20:59.802: INFO: Trying to get logs from node node1 pod pod-configmaps-aa9e4744-23e2-4705-a233-fbf95da5c910 container configmap-volume-test:
STEP: delete the pod
May 4 16:20:59.817: INFO: Waiting for pod pod-configmaps-aa9e4744-23e2-4705-a233-fbf95da5c910 to disappear
May 4 16:20:59.819: INFO: Pod pod-configmaps-aa9e4744-23e2-4705-a233-fbf95da5c910 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:20:59.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9446" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":630,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:15:55.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating server pod server in namespace prestop-5816
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5816
May 4 16:20:59.654: FAIL: waiting for tester pod to start
Unexpected error:
    <*errors.errorString | 0xc0002c2200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/node.testPreStop(0x54075e0, 0xc000430b00, 0xc003230ca0, 0xc)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:107 +0x105d
k8s.io/kubernetes/test/e2e/node.glob..func11.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 +0x4d
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001568300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001568300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001568300, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: Deleting the tester pod
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "prestop-5816".
STEP: Found 16 events.
May 4 16:20:59.669: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server: { } Scheduled: Successfully assigned prestop-5816/server to node1
May 4 16:20:59.669: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for tester: { } Scheduled: Successfully assigned prestop-5816/tester to node2
May 4 16:20:59.669: INFO: At 2021-05-04 16:15:57 +0000 UTC - event for server: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:20:59.670: INFO: At 2021-05-04 16:15:57 +0000 UTC - event for server: {multus } AddedInterface: Add eth0 [10.244.4.151/24]
May 4 16:20:59.670: INFO: At 2021-05-04 16:15:58 +0000 UTC - event for server: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 850.875284ms
May 4 16:20:59.670: INFO: At 2021-05-04 16:15:58 +0000 UTC - event for server: {kubelet node1} Created: Created container agnhost-container
May 4 16:20:59.670: INFO: At 2021-05-04 16:15:58 +0000 UTC - event for server: {kubelet node1} Started: Started container agnhost-container
May 4 16:20:59.670: INFO: At 2021-05-04 16:16:01 +0000 UTC - event for tester: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:20:59.670: INFO: At 2021-05-04 16:16:01 +0000 UTC - event for tester: {multus } AddedInterface: Add eth0 [10.244.3.206/24]
May 4 16:20:59.670: INFO: At 2021-05-04 16:16:02 +0000 UTC - event for tester: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:20:59.670: INFO: At 2021-05-04 16:16:02 +0000 UTC - event for tester: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:20:59.670: INFO: At 2021-05-04 16:16:03 +0000 UTC - event for tester: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:20:59.670: INFO: At 2021-05-04 16:16:04 +0000 UTC - event for tester: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:20:59.670: INFO: At 2021-05-04 16:16:04 +0000 UTC - event for tester: {multus } AddedInterface: Add eth0 [10.244.3.207/24] May 4 16:20:59.670: INFO: At 2021-05-04 16:16:04 +0000 UTC - event for tester: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:20:59.670: INFO: At 2021-05-04 16:20:59 +0000 UTC - event for server: {kubelet node1} Killing: Stopping container agnhost-container May 4 16:20:59.672: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:20:59.672: INFO: tester node2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:15:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:15:59 +0000 UTC ContainersNotReady containers with unready status: [tester]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:15:59 +0000 UTC ContainersNotReady containers with unready status: [tester]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:15:59 +0000 UTC }] May 4 16:20:59.672: INFO: May 4 16:20:59.676: INFO: Logging node info for node master1 May 4 16:20:59.678: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 39759 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:59 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:59 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:59 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:20:59 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:20:59.679: INFO: Logging kubelet events for node master1 May 4 16:20:59.681: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:20:59.690: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.690: INFO: Container 
coredns ready: true, restart count 1 May 4 16:20:59.690: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.690: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:20:59.690: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:20:59.690: INFO: Init container install-cni ready: true, restart count 0 May 4 16:20:59.690: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:20:59.690: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.690: INFO: Container kube-multus ready: true, restart count 1 May 4 16:20:59.690: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.690: INFO: Container docker-registry ready: true, restart count 0 May 4 16:20:59.690: INFO: Container nginx ready: true, restart count 0 May 4 16:20:59.690: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.690: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:20:59.690: INFO: Container node-exporter ready: true, restart count 0 May 4 16:20:59.690: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.690: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:20:59.690: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.690: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:20:59.690: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.690: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:20:59.690: INFO: 
kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.690: INFO: Container kube-proxy ready: true, restart count 1 W0504 16:20:59.703803 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:20:59.727: INFO: Latency metrics for node master1 May 4 16:20:59.727: INFO: Logging node info for node master2 May 4 16:20:59.731: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 39751 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:58 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:20:58 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:58 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:20:58 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:20:59.731: INFO: Logging kubelet events for node master2 May 4 16:20:59.733: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:20:59.742: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.742: INFO: Container autoscaler ready: true, restart count 1 May 4 16:20:59.742: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.742: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:20:59.742: INFO: Container node-exporter ready: true, restart count 0 May 4 16:20:59.742: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.742: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:20:59.742: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.742: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:20:59.742: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.742: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:20:59.742: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.742: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:20:59.742: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container 
statuses recorded) May 4 16:20:59.742: INFO: Init container install-cni ready: true, restart count 0 May 4 16:20:59.742: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:20:59.742: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.742: INFO: Container kube-multus ready: true, restart count 1 W0504 16:20:59.755289 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:20:59.778: INFO: Latency metrics for node master2 May 4 16:20:59.778: INFO: Logging node info for node master3 May 4 16:20:59.780: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 39750 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:58 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:58 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:58 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:20:58 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:20:59.780: INFO: Logging kubelet events for node master3 May 4 16:20:59.783: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:20:59.791: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:20:59.791: INFO: Init container install-cni ready: true, restart count 0 May 4 16:20:59.791: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:20:59.791: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.791: INFO: Container kube-multus ready: true, restart count 1 May 4 16:20:59.791: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.791: INFO: Container coredns ready: true, restart count 1 May 4 16:20:59.791: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.791: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:20:59.791: INFO: Container node-exporter ready: true, restart count 0 May 4 16:20:59.791: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.791: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:20:59.791: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.791: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:20:59.791: INFO: 
kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.791: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:20:59.791: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.791: INFO: Container kube-proxy ready: true, restart count 2 W0504 16:20:59.802806 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:20:59.831: INFO: Latency metrics for node master3 May 4 16:20:59.831: INFO: Logging node info for node node1 May 4 16:20:59.833: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 39692 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:52 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:52 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:52 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:20:52 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:20:59.834: INFO: Logging kubelet events for node node1 May 4 16:20:59.836: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:20:59.850: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.850: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:20:59.850: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.850: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:20:59.850: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.850: INFO: Container liveness-http ready: true, restart count 18 May 4 16:20:59.850: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.850: INFO: Container srv ready: true, restart count 0 May 4 16:20:59.850: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:20:59.850: INFO: Container discover ready: false, restart count 0 May 4 16:20:59.850: INFO: Container init ready: false, restart count 0 May 4 16:20:59.850: INFO: Container install ready: false, restart count 0 May 4 16:20:59.850: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:20:49 +0000 UTC (0+1 container statuses recorded) May 4 
16:20:59.850: INFO: Container env3cont ready: false, restart count 0 May 4 16:20:59.850: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.850: INFO: Container kube-multus ready: true, restart count 1 May 4 16:20:59.850: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.850: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:20:59.850: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.850: INFO: Container nodereport ready: true, restart count 0 May 4 16:20:59.850: INFO: Container reconcile ready: true, restart count 0 May 4 16:20:59.850: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:20:59.850: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:20:59.850: INFO: Container grafana ready: true, restart count 0 May 4 16:20:59.850: INFO: Container prometheus ready: true, restart count 1 May 4 16:20:59.850: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:20:59.850: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:20:59.850: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:20:59.850: INFO: Init container install-cni ready: true, restart count 2 May 4 16:20:59.850: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:20:59.850: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.850: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:20:59.850: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.850: INFO: Container kube-rbac-proxy 
ready: true, restart count 0 May 4 16:20:59.851: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:20:59.851: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.851: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:20:59.851: INFO: Container node-exporter ready: true, restart count 0 May 4 16:20:59.851: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:20:59.851: INFO: Container collectd ready: true, restart count 0 May 4 16:20:59.851: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:20:59.851: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:20:59.851: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.851: INFO: Container c ready: false, restart count 0 May 4 16:20:59.851: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.851: INFO: Container kube-sriovdp ready: true, restart count 0 W0504 16:20:59.862245 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:20:59.904: INFO: Latency metrics for node node1 May 4 16:20:59.904: INFO: Logging node info for node node2 May 4 16:20:59.910: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 39677 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:51 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:51 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:20:51 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:20:51 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:20:59.911: INFO: Logging kubelet events for node node2 May 4 16:20:59.919: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:20:59.935: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:20:59.935: INFO: Container collectd ready: true, restart count 0 May 4 16:20:59.935: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:20:59.935: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:20:59.935: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container c ready: false, restart count 0 May 4 16:20:59.935: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:20:59.935: INFO: Container discover ready: false, restart count 0 May 4 16:20:59.935: INFO: Container init ready: false, restart count 0 May 4 16:20:59.935: INFO: Container install ready: false, restart count 0 May 4 16:20:59.935: INFO: e2e-test-httpd-pod started 
at 2021-05-04 16:11:06 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 May 4 16:20:59.935: INFO: tester started at 2021-05-04 16:15:59 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container tester ready: false, restart count 0 May 4 16:20:59.935: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:20:59.935: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:20:59.935: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.935: INFO: Container nodereport ready: true, restart count 0 May 4 16:20:59.935: INFO: Container reconcile ready: true, restart count 0 May 4 16:20:59.935: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:20:59.935: INFO: busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0 started at 2021-05-04 16:20:23 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container busybox ready: false, restart count 0 May 4 16:20:59.935: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:20:59.935: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container webserver ready: false, restart count 0 May 4 16:20:59.935: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:20:59.935: INFO: 
Init container install-cni ready: true, restart count 2 May 4 16:20:59.935: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:20:59.935: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.935: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:20:59.935: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.935: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:20:59.935: INFO: Container node-exporter ready: true, restart count 0 May 4 16:20:59.936: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:20:59.936: INFO: Container tas-controller ready: true, restart count 0 May 4 16:20:59.936: INFO: Container tas-extender ready: true, restart count 0 May 4 16:20:59.936: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.936: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:20:59.936: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.936: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:20:59.936: INFO: pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 started at 2021-05-04 16:16:21 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.936: INFO: Container nginx ready: false, restart count 0 May 4 16:20:59.936: INFO: termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c started at 2021-05-04 16:16:44 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.936: INFO: Container termination-message-container ready: false, restart count 0 May 4 16:20:59.936: INFO: pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee started at 2021-05-04 16:17:49 +0000 UTC (2+1 container statuses recorded) May 4 16:20:59.936: INFO: Init container 
init1 ready: false, restart count 0 May 4 16:20:59.936: INFO: Init container init2 ready: false, restart count 0 May 4 16:20:59.936: INFO: Container run1 ready: false, restart count 0 May 4 16:20:59.936: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:20:59.936: INFO: Container kube-multus ready: true, restart count 1 W0504 16:20:59.949421 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:20:59.986: INFO: Latency metrics for node node2 May 4 16:20:59.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5816" for this suite. • Failure [304.406 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:20:59.654: waiting for tester pod to start Unexpected error: <*errors.errorString | 0xc0002c2200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:107 ------------------------------ {"msg":"FAILED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":26,"skipped":421,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:20:59.868: INFO: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:21:00.105: INFO: Checking APIGroup: apiregistration.k8s.io
May 4 16:21:00.106: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
May 4 16:21:00.106: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.106: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
May 4 16:21:00.106: INFO: Checking APIGroup: extensions
May 4 16:21:00.106: INFO: PreferredVersion.GroupVersion: extensions/v1beta1
May 4 16:21:00.106: INFO: Versions found [{extensions/v1beta1 v1beta1}]
May 4 16:21:00.106: INFO: extensions/v1beta1 matches extensions/v1beta1
May 4 16:21:00.106: INFO: Checking APIGroup: apps
May 4 16:21:00.107: INFO: PreferredVersion.GroupVersion: apps/v1
May 4 16:21:00.107: INFO: Versions found [{apps/v1 v1}]
May 4 16:21:00.107: INFO: apps/v1 matches apps/v1
May 4 16:21:00.107: INFO: Checking APIGroup: events.k8s.io
May 4 16:21:00.108: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
May 4 16:21:00.108: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.108: INFO: events.k8s.io/v1 matches events.k8s.io/v1
May 4 16:21:00.108: INFO: Checking APIGroup: authentication.k8s.io
May 4 16:21:00.108: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
May 4 16:21:00.108: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.108: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
May 4 16:21:00.108: INFO: Checking APIGroup: authorization.k8s.io
May 4 16:21:00.109: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
May 4 16:21:00.109: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.109: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
May 4 16:21:00.109: INFO: Checking APIGroup: autoscaling
May 4 16:21:00.109: INFO: PreferredVersion.GroupVersion: autoscaling/v1
May 4 16:21:00.109: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
May 4 16:21:00.109: INFO: autoscaling/v1 matches autoscaling/v1
May 4 16:21:00.109: INFO: Checking APIGroup: batch
May 4 16:21:00.110: INFO: PreferredVersion.GroupVersion: batch/v1
May 4 16:21:00.110: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
May 4 16:21:00.110: INFO: batch/v1 matches batch/v1
May 4 16:21:00.110: INFO: Checking APIGroup: certificates.k8s.io
May 4 16:21:00.111: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
May 4 16:21:00.111: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.111: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
May 4 16:21:00.111: INFO: Checking APIGroup: networking.k8s.io
May 4 16:21:00.111: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
May 4 16:21:00.111: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.111: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
May 4 16:21:00.111: INFO: Checking APIGroup: policy
May 4 16:21:00.112: INFO: PreferredVersion.GroupVersion: policy/v1beta1
May 4 16:21:00.112: INFO: Versions found [{policy/v1beta1 v1beta1}]
May 4 16:21:00.112: INFO: policy/v1beta1 matches policy/v1beta1
May 4 16:21:00.112: INFO: Checking APIGroup: rbac.authorization.k8s.io
May 4 16:21:00.113: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
May 4 16:21:00.113: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.113: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
May 4 16:21:00.113: INFO: Checking APIGroup: storage.k8s.io
May 4 16:21:00.114: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
May 4 16:21:00.114: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.114: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
May 4 16:21:00.114: INFO: Checking APIGroup: admissionregistration.k8s.io
May 4 16:21:00.115: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
May 4 16:21:00.115: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.115: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
May 4 16:21:00.115: INFO: Checking APIGroup: apiextensions.k8s.io
May 4 16:21:00.116: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
May 4 16:21:00.116: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.116: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
May 4 16:21:00.116: INFO: Checking APIGroup: scheduling.k8s.io
May 4 16:21:00.117: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
May 4 16:21:00.117: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.117: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
May 4 16:21:00.117: INFO: Checking APIGroup: coordination.k8s.io
May 4 16:21:00.117: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
May 4 16:21:00.117: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.117: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
May 4 16:21:00.117: INFO: Checking APIGroup: node.k8s.io
May 4 16:21:00.118: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1
May 4 16:21:00.118: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.118: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1
May 4 16:21:00.118: INFO: Checking APIGroup: discovery.k8s.io
May 4 16:21:00.119: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1
May 4 16:21:00.119: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.119: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1
May 4 16:21:00.119: INFO: Checking APIGroup: intel.com
May 4 16:21:00.120: INFO: PreferredVersion.GroupVersion: intel.com/v1
May 4 16:21:00.120: INFO: Versions found [{intel.com/v1 v1}]
May 4 16:21:00.120: INFO: intel.com/v1 matches intel.com/v1
May 4 16:21:00.120: INFO: Checking APIGroup: k8s.cni.cncf.io
May 4 16:21:00.121: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1
May 4 16:21:00.121: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}]
May 4 16:21:00.121: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1
May 4 16:21:00.121: INFO: Checking APIGroup: monitoring.coreos.com
May 4 16:21:00.123: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1
May 4 16:21:00.123: INFO: Versions found [{monitoring.coreos.com/v1 v1}]
May 4 16:21:00.123: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1
May 4 16:21:00.123: INFO: Checking APIGroup: telemetry.intel.com
May 4 16:21:00.126: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1
May 4 16:21:00.126: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}]
May 4 16:21:00.126: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1
May 4 16:21:00.126: INFO: Checking APIGroup: custom.metrics.k8s.io
May 4 16:21:00.128: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1
May 4 16:21:00.128: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}]
May 4 16:21:00.128: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:21:00.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-6189" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":34,"skipped":651,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:21:00.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May 4 16:21:00.217: INFO: Created pod &Pod{ObjectMeta:{dns-7606 dns-7606 /api/v1/namespaces/dns-7606/pods/dns-7606 c1327f45-2fed-4820-a015-f493c0d0429f 39791 0 2021-05-04 16:21:00 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-05-04 16:21:00 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qg626,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qg626,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qg626,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},
StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 16:21:00.219: INFO: The status of Pod dns-7606 is Pending, waiting for it to be Running (with Ready = true) May 4 16:21:02.222: INFO: The status of Pod dns-7606 is Pending, waiting for it to be Running (with Ready = true) May 4 16:21:04.223: INFO: The status of Pod dns-7606 is Running (Ready = true) STEP: Verifying customized DNS suffix list is 
configured on pod...
May 4 16:21:04.223: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7606 PodName:dns-7606 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:21:04.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized DNS server is configured on pod...
May 4 16:21:04.350: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7606 PodName:dns-7606 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:21:04.350: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:21:04.465: INFO: Deleting pod dns-7606...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:21:04.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7606" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":35,"skipped":676,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:21:04.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:21:04.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4836" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":36,"skipped":684,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:21:00.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-17964a78-9f11-4191-99e9-d909336579cf
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-17964a78-9f11-4191-99e9-d909336579cf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:21:06.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2376" for this suite.
• [SLOW TEST:6.188 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":423,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:21:04.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:21:04.625: INFO: Creating deployment "test-recreate-deployment"
May 4 16:21:04.629: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May 4 16:21:04.634: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May 4 16:21:06.640: INFO: Waiting deployment "test-recreate-deployment" to complete
May 4 16:21:06.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1,
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742064, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742064, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742064, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742064, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:21:08.645: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 4 16:21:08.653: INFO: Updating deployment test-recreate-deployment May 4 16:21:08.653: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 May 4 16:21:08.689: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6952 /apis/apps/v1/namespaces/deployment-6952/deployments/test-recreate-deployment 1f25ad28-d028-4d43-8638-55ec0ab9e77f 39977 2 2021-05-04 16:21:04 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-04 16:21:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-04 16:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007502b18 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-05-04 16:21:08 +0000 UTC,LastTransitionTime:2021-05-04 16:21:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-05-04 16:21:08 +0000 UTC,LastTransitionTime:2021-05-04 16:21:04 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 4 16:21:08.692: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-6952 /apis/apps/v1/namespaces/deployment-6952/replicasets/test-recreate-deployment-f79dd4667 0dafdf80-7b4f-40af-ae32-51c8a2c63d25 39976 1 2021-05-04 16:21:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1f25ad28-d028-4d43-8638-55ec0ab9e77f 0xc0037ab1d0 0xc0037ab1d1}] [] [{kube-controller-manager Update apps/v1 2021-05-04 16:21:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f25ad28-d028-4d43-8638-55ec0ab9e77f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037ab248 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 16:21:08.692: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 4 16:21:08.692: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-6952 /apis/apps/v1/namespaces/deployment-6952/replicasets/test-recreate-deployment-c96cf48f 1a4070bd-1d5b-43ae-b3cc-cfddeb656c3e 39965 2 2021-05-04 16:21:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1f25ad28-d028-4d43-8638-55ec0ab9e77f 0xc0037ab0df 0xc0037ab0f0}] [] [{kube-controller-manager Update apps/v1 2021-05-04 16:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f25ad28-d028-4d43-8638-55ec0ab9e77f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector
{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037ab168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 16:21:08.695: INFO: Pod "test-recreate-deployment-f79dd4667-v5zsh" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-v5zsh test-recreate-deployment-f79dd4667- deployment-6952 /api/v1/namespaces/deployment-6952/pods/test-recreate-deployment-f79dd4667-v5zsh abbb07e2-4d3e-4e1d-8558-1246a199956d 39978 0 2021-05-04 16:21:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 0dafdf80-7b4f-40af-ae32-51c8a2c63d25 0xc00759d0ef 0xc00759d100}] [] [{kube-controller-manager Update v1 2021-05-04 16:21:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dafdf80-7b4f-40af-ae32-51c8a2c63d25\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-04 16:21:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-snjsx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-snjsx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-snjsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:21:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:21:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:21:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:21:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-05-04 16:21:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:21:08.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6952" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":37,"skipped":705,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:11:06.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1546 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 4 16:11:06.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8135 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' May 4 16:11:06.183: INFO: stderr: "" May 4 16:11:06.183: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running May 4 16:21:06.294: FAIL: Failed getting pod e2e-test-httpd-pod: Timeout while waiting for pods with labels "run=e2e-test-httpd-pod" to be running Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.27.3() 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1567 +0xb6c k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0015fcd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc0015fcd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc0015fcd80, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 May 4 16:21:06.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8135 delete pods e2e-test-httpd-pod' May 4 16:21:09.731: INFO: stderr: "" May 4 16:21:09.731: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "kubectl-8135". STEP: Found 9 events. May 4 16:21:09.733: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for e2e-test-httpd-pod: { } Scheduled: Successfully assigned kubectl-8135/e2e-test-httpd-pod to node2 May 4 16:21:09.733: INFO: At 2021-05-04 16:11:07 +0000 UTC - event for e2e-test-httpd-pod: {multus } AddedInterface: Add eth0 [10.244.3.191/24] May 4 16:21:09.733: INFO: At 2021-05-04 16:11:07 +0000 UTC - event for e2e-test-httpd-pod: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:21:09.733: INFO: At 2021-05-04 16:11:08 +0000 UTC - event for e2e-test-httpd-pod: {kubelet node2} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:21:09.733: INFO: At 2021-05-04 16:11:08 +0000 UTC - event for e2e-test-httpd-pod: {kubelet node2} Failed: Error: ErrImagePull May 4 16:21:09.733: INFO: At 2021-05-04 16:11:09 +0000 UTC - event for e2e-test-httpd-pod: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 4 16:21:09.733: INFO: At 2021-05-04 16:11:10 +0000 UTC - event for e2e-test-httpd-pod: {multus } AddedInterface: Add eth0 [10.244.3.192/24] May 4 16:21:09.734: INFO: At 2021-05-04 16:11:10 +0000 UTC - event for e2e-test-httpd-pod: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:21:09.734: INFO: At 2021-05-04 16:11:10 +0000 UTC - event for e2e-test-httpd-pod: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:21:09.735: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:21:09.735: INFO: May 4 16:21:09.739: INFO: Logging node info for node master1 May 4 16:21:09.742: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 40018 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:09 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:09.742: INFO: Logging kubelet events for node master1 May 4 16:21:09.744: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:21:09.753: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:21:09.753: INFO: Init container 
install-cni ready: true, restart count 0 May 4 16:21:09.753: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:21:09.753: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.753: INFO: Container kube-multus ready: true, restart count 1 May 4 16:21:09.753: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.753: INFO: Container coredns ready: true, restart count 1 May 4 16:21:09.753: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.753: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:21:09.753: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.753: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:21:09.753: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.753: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:21:09.753: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.753: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:21:09.753: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:21:09.753: INFO: Container docker-registry ready: true, restart count 0 May 4 16:21:09.753: INFO: Container nginx ready: true, restart count 0 May 4 16:21:09.754: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:21:09.754: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:09.754: INFO: Container node-exporter ready: true, restart count 0 May 4 16:21:09.754: INFO: 
kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.754: INFO: Container kube-scheduler ready: true, restart count 0 W0504 16:21:09.766030 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:21:09.796: INFO: Latency metrics for node master1 May 4 16:21:09.796: INFO: Logging node info for node master2 May 4 16:21:09.798: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 39989 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:21:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:08 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:09.799: INFO: Logging kubelet events for node master2 May 4 16:21:09.801: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:21:09.810: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.810: INFO: Container autoscaler ready: true, restart count 1 May 4 16:21:09.810: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:21:09.810: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:09.810: INFO: Container node-exporter ready: true, restart count 0 May 4 16:21:09.810: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.811: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:21:09.811: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.811: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:21:09.811: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.811: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:21:09.811: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.811: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:21:09.811: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container 
statuses recorded) May 4 16:21:09.811: INFO: Init container install-cni ready: true, restart count 0 May 4 16:21:09.811: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:21:09.811: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.811: INFO: Container kube-multus ready: true, restart count 1 W0504 16:21:09.824287 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:21:09.848: INFO: Latency metrics for node master2 May 4 16:21:09.848: INFO: Logging node info for node master3 May 4 16:21:09.851: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 39956 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:08 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:08 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:08 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:09.852: INFO: Logging kubelet events for node master3 May 4 16:21:09.854: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:21:09.862: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:21:09.862: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:09.862: INFO: Container node-exporter ready: true, restart count 0 May 4 16:21:09.862: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.862: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:21:09.862: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.862: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:21:09.862: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.862: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:21:09.862: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.862: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:21:09.862: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:21:09.862: INFO: Init container install-cni ready: true, restart count 0 May 4 16:21:09.862: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:21:09.862: INFO: 
kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.862: INFO: Container kube-multus ready: true, restart count 1 May 4 16:21:09.862: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:21:09.862: INFO: Container coredns ready: true, restart count 1 W0504 16:21:09.875980 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:21:09.903: INFO: Latency metrics for node master3 May 4 16:21:09.904: INFO: Logging node info for node node1 May 4 16:21:09.907: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 39837 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true 
feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:02 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:02 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:02 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:02 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:09.907: INFO: Logging kubelet events for node node1 May 4 16:21:09.909: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:21:10.075: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:20:49 +0000 UTC (0+1 container statuses recorded) May 4 16:21:10.075: INFO: Container env3cont ready: false, restart count 0 May 4 16:21:10.075: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:10.075: INFO: Container kube-multus ready: true, restart count 1 May 4 16:21:10.075: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:21:10.075: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:21:10.075: INFO: Container grafana ready: true, restart count 0 May 4 16:21:10.075: INFO: Container prometheus ready: true, restart count 1 May 4 16:21:10.075: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:21:10.075: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:21:10.075: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:21:10.075: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:21:10.075: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses 
recorded)
May 4 16:21:10.075: INFO: Container nodereport ready: true, restart count 0
May 4 16:21:10.075: INFO: Container reconcile ready: true, restart count 0
May 4 16:21:10.075: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:10.075: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:21:10.075: INFO: Container prometheus-operator ready: true, restart count 0
May 4 16:21:10.075: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:10.075: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:21:10.075: INFO: Container node-exporter ready: true, restart count 0
May 4 16:21:10.075: INFO: test-recreate-deployment-f79dd4667-v5zsh started at 2021-05-04 16:21:08 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.075: INFO: Container httpd ready: false, restart count 0
May 4 16:21:10.075: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:21:10.075: INFO: Init container install-cni ready: true, restart count 2
May 4 16:21:10.075: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:21:10.075: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.075: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:21:10.075: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:21:10.075: INFO: Container collectd ready: true, restart count 0
May 4 16:21:10.075: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:21:10.075: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:21:10.075: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.075: INFO: Container c ready: false, restart count 0
May 4 16:21:10.075: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.075: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:21:10.075: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.075: INFO: Container liveness-http ready: true, restart count 18
May 4 16:21:10.075: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.075: INFO: Container srv ready: true, restart count 0
May 4 16:21:10.075: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.075: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:21:10.075: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.075: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:21:10.075: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded)
May 4 16:21:10.075: INFO: Container discover ready: false, restart count 0
May 4 16:21:10.075: INFO: Container init ready: false, restart count 0
May 4 16:21:10.075: INFO: Container install ready: false, restart count 0
W0504 16:21:10.087861 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:21:10.202: INFO: Latency metrics for node node1
May 4 16:21:10.202: INFO: Logging node info for node node2
May 4 16:21:10.205: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 39822 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:01 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:01 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:01 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:01 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:21:10.205: INFO: Logging kubelet events for node node2
May 4 16:21:10.208: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:21:10.219: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container kube-multus ready: true, restart count 1
May 4 16:21:10.219: INFO: pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 started at 2021-05-04 16:16:21 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container nginx ready: false, restart count 0
May 4 16:21:10.219: INFO: termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c started at 2021-05-04 16:16:44 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container termination-message-container ready: false, restart count 0
May 4 16:21:10.219: INFO: pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee started at 2021-05-04 16:17:49 +0000 UTC (2+1 container statuses recorded)
May 4 16:21:10.219: INFO: Init container init1 ready: false, restart count 0
May 4 16:21:10.219: INFO: Init container init2 ready: false, restart count 0
May 4 16:21:10.219: INFO: Container run1 ready: false, restart count 0
May 4 16:21:10.219: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:21:10.219: INFO: Container discover ready: false, restart count 0
May 4 16:21:10.219: INFO: Container init ready: false, restart count 0
May 4 16:21:10.219: INFO: Container install ready: false, restart count 0
May 4 16:21:10.219: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:21:10.219: INFO: Container collectd ready: true, restart count 0
May 4 16:21:10.219: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:21:10.219: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:21:10.219: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container c ready: false, restart count 0
May 4 16:21:10.219: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:21:10.219: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:21:10.219: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:21:10.219: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:10.219: INFO: Container nodereport ready: true, restart count 0
May 4 16:21:10.219: INFO: Container reconcile ready: true, restart count 0
May 4 16:21:10.219: INFO: pod-configmaps-fe9c6a93-248b-4ce5-8506-2aeb932fe139 started at 2021-05-04 16:21:00 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container configmap-volume-test ready: true, restart count 0
May 4 16:21:10.219: INFO: busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0 started at 2021-05-04 16:20:23 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container busybox ready: false, restart count 0
May 4 16:21:10.219: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:21:10.219: INFO: Init container install-cni ready: true, restart count 2
May 4 16:21:10.219: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:21:10.219: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:21:10.219: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container webserver ready: false, restart count 0
May 4 16:21:10.219: INFO: sample-webhook-deployment-cbccbf6bb-c25gl started at 2021-05-04 16:21:09 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container sample-webhook ready: false, restart count 0
May 4 16:21:10.219: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:21:10.219: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:21:10.219: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:21:10.219: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:10.219: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:21:10.219: INFO: Container node-exporter ready: true, restart count 0
May 4 16:21:10.219: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:10.219: INFO: Container tas-controller ready: true, restart count 0
May 4 16:21:10.219: INFO: Container tas-extender ready: true, restart count 0
May 4 16:21:10.219: INFO: sample-webhook-deployment-cbccbf6bb-tmwlv started at 2021-05-04 16:21:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:10.219: INFO: Container sample-webhook ready: false, restart count 0
W0504 16:21:10.231041 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:21:10.353: INFO: Latency metrics for node node2
May 4 16:21:10.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8135" for this suite.
• Failure [604.346 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543
    should update a single-container pod's image [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

    May 4 16:21:06.294: Failed getting pod e2e-test-httpd-pod: Timeout while waiting for pods with labels "run=e2e-test-httpd-pod" to be running

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1567
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":29,"skipped":378,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:21:08.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
May 4 16:21:09.035: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 4 16:21:09.047: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 4 16:21:11.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742069, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742069, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742069, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742069, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 4 16:21:14.065: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 4 16:21:18.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-8227 attach --namespace=webhook-8227 to-be-attached-pod -i -c=container1'
May 4 16:21:18.279: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:21:18.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8227" for this suite. STEP: Destroying namespace "webhook-8227-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.576 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":38,"skipped":724,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:06.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook 
read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 16:21:06.814: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 4 16:21:08.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742066, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742066, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742066, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742066, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 16:21:10.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742066, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742066, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742066, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742066, 
loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 4 16:21:13.833: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:21:13.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5682-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:21:19.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1365" for this suite.
STEP: Destroying namespace "webhook-1365-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.670 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":28,"skipped":470,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
S
------------------------------
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:16:21.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be updated [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
May 4 16:21:21.101: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002c2200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc004681e60,
0xc002b55c00, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 +0xfe
k8s.io/kubernetes/test/e2e/common.glob..func18.4()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:369 +0x498
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002d4d800, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "pods-1586".
STEP: Found 7 events.
May 4 16:21:21.105: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91: { } Scheduled: Successfully assigned pods-1586/pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 to node2
May 4 16:21:21.105: INFO: At 2021-05-04 16:16:22 +0000 UTC - event for pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91: {multus } AddedInterface: Add eth0 [10.244.3.209/24]
May 4 16:21:21.105: INFO: At 2021-05-04 16:16:22 +0000 UTC - event for pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91: {kubelet node2} Pulling: Pulling image "docker.io/library/nginx:1.14-alpine"
May 4 16:21:21.105: INFO: At 2021-05-04 16:16:23 +0000 UTC - event for pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91: {kubelet node2} Failed: Failed to pull image "docker.io/library/nginx:1.14-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:21:21.105: INFO: At 2021-05-04 16:16:23 +0000 UTC - event for pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:21:21.105: INFO: At 2021-05-04 16:16:23 +0000 UTC - event for pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/nginx:1.14-alpine"
May 4 16:21:21.105: INFO: At 2021-05-04 16:16:23 +0000 UTC - event for pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:21:21.107: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:21:21.107: INFO: pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:16:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:16:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:16:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:16:21 +0000 UTC }]
May 4 16:21:21.107: INFO:
May 4 16:21:21.111: INFO: Logging node info for node master1
May 4 16:21:21.114: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 40215 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:19 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:19 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:19 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:19 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:21.114: INFO: Logging kubelet events for node master1 May 4 16:21:21.117: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:21:21.125: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:21:21.125: INFO: Init container 
install-cni ready: true, restart count 0
May 4 16:21:21.125: INFO: Container kube-flannel ready: true, restart count 3
May 4 16:21:21.125: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.125: INFO: Container kube-multus ready: true, restart count 1
May 4 16:21:21.125: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.125: INFO: Container coredns ready: true, restart count 1
May 4 16:21:21.125: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.125: INFO: Container nfd-controller ready: true, restart count 0
May 4 16:21:21.125: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.125: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:21:21.125: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.125: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:21:21.125: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:21.125: INFO: Container docker-registry ready: true, restart count 0
May 4 16:21:21.125: INFO: Container nginx ready: true, restart count 0
May 4 16:21:21.125: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:21.125: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:21:21.125: INFO: Container node-exporter ready: true, restart count 0
May 4 16:21:21.125: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.125: INFO: Container kube-scheduler ready: true, restart count 0
May 4 16:21:21.125: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.125: INFO: Container kube-apiserver ready: true, restart count 0
W0504 16:21:21.138456 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:21:21.164: INFO: Latency metrics for node master1
May 4 16:21:21.164: INFO: Logging node info for node master2
May 4 16:21:21.167: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 40198 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:21:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:18 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:21:21.167: INFO: Logging kubelet events for node master2
May 4 16:21:21.169: INFO: Logging pods the kubelet thinks is on node master2
May 4 16:21:21.176: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:21.176: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:21:21.176: INFO: Container node-exporter ready: true, restart count 0
May 4 16:21:21.176: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.176: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:21:21.176: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.176: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:21:21.176: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.176: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:21:21.176: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.176: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:21:21.176: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:21:21.176: INFO: Init container install-cni ready: true, restart count 0
May 4 16:21:21.176: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:21:21.176: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.176: INFO: Container kube-multus ready: true, restart count 1
May 4 16:21:21.176: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:21.176: INFO: Container autoscaler ready: true, restart count 1
W0504 16:21:21.190284 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:21:21.216: INFO: Latency metrics for node master2
May 4 16:21:21.216: INFO: Logging node info for node master3
May 4 16:21:21.220: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 40197 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:18 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:18 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:18 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:21.220: INFO: Logging kubelet events for node master3 May 4 16:21:21.223: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:21:21.231: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:21:21.231: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:21.231: INFO: Container node-exporter ready: true, restart count 0 May 4 16:21:21.231: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.231: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:21:21.231: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.231: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:21:21.231: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.231: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:21:21.231: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.231: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:21:21.231: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:21:21.231: INFO: Init container install-cni ready: true, restart count 0 May 4 16:21:21.231: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:21:21.231: INFO: 
kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.231: INFO: Container kube-multus ready: true, restart count 1 May 4 16:21:21.231: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.231: INFO: Container coredns ready: true, restart count 1 W0504 16:21:21.244242 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:21:21.266: INFO: Latency metrics for node master3 May 4 16:21:21.266: INFO: Logging node info for node node1 May 4 16:21:21.269: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 40103 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true 
feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:13 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:13 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:13 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:13 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:21.270: INFO: Logging kubelet events for node node1 May 4 16:21:21.272: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:21:21.402: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:21:21.402: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:21:21.402: INFO: Container nodereport ready: true, restart count 0 May 4 16:21:21.402: INFO: Container reconcile ready: true, restart count 0 May 4 16:21:21.402: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:21:21.402: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:21:21.402: INFO: Container grafana ready: true, restart count 0 May 4 16:21:21.402: INFO: Container prometheus ready: true, restart count 1 May 4 16:21:21.402: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:21:21.402: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:21:21.402: INFO: netserver-0 started at 2021-05-04 16:21:19 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container webserver ready: false, restart count 0 May 4 16:21:21.402: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 
container statuses recorded) May 4 16:21:21.402: INFO: Init container install-cni ready: true, restart count 2 May 4 16:21:21.402: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:21:21.402: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:21:21.402: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:21:21.402: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:21.402: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:21:21.402: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:21:21.402: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:21.402: INFO: Container node-exporter ready: true, restart count 0 May 4 16:21:21.402: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:21:21.402: INFO: Container collectd ready: true, restart count 0 May 4 16:21:21.402: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:21:21.402: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:21:21.402: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container c ready: false, restart count 0 May 4 16:21:21.402: INFO: pod-adoption started at 2021-05-04 16:21:10 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container pod-adoption ready: false, restart count 0 May 4 16:21:21.402: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:21:21.402: INFO: nginx-proxy-node1 
started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:21:21.402: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:21:21.402: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container liveness-http ready: true, restart count 19 May 4 16:21:21.402: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container srv ready: true, restart count 0 May 4 16:21:21.402: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:21:21.402: INFO: Container discover ready: false, restart count 0 May 4 16:21:21.402: INFO: Container init ready: false, restart count 0 May 4 16:21:21.402: INFO: Container install ready: false, restart count 0 May 4 16:21:21.402: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:20:49 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container env3cont ready: false, restart count 0 May 4 16:21:21.402: INFO: busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 started at 2021-05-04 16:21:18 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 ready: false, restart count 0 May 4 16:21:21.402: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.402: INFO: Container kube-multus ready: true, restart count 1 W0504 16:21:21.415864 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:21:21.458: INFO: Latency metrics for node node1 May 4 16:21:21.458: INFO: Logging node info for node node2 May 4 16:21:21.463: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 40081 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:11 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:11 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:11 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:11 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:21.463: INFO: Logging kubelet events for node node2 May 4 16:21:21.465: INFO: Logging pods the kubelet thinks are on node node2 May 4 16:21:21.530: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:21:21.530: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:21:21.530: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:21:21.530: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:21:21.530: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:21.530: INFO: Container node-exporter ready: true, restart count 0 May 4 16:21:21.530: INFO: 
tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:21:21.530: INFO: Container tas-controller ready: true, restart count 0 May 4 16:21:21.530: INFO: Container tas-extender ready: true, restart count 0 May 4 16:21:21.530: INFO: netserver-1 started at 2021-05-04 16:21:20 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container webserver ready: false, restart count 0 May 4 16:21:21.530: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container kube-multus ready: true, restart count 1 May 4 16:21:21.530: INFO: pod-update-58c100c1-80db-41aa-82d8-3e236dfc5b91 started at 2021-05-04 16:16:21 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container nginx ready: false, restart count 0 May 4 16:21:21.530: INFO: termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c started at 2021-05-04 16:16:44 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container termination-message-container ready: false, restart count 0 May 4 16:21:21.530: INFO: pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee started at 2021-05-04 16:17:49 +0000 UTC (2+1 container statuses recorded) May 4 16:21:21.530: INFO: Init container init1 ready: false, restart count 0 May 4 16:21:21.530: INFO: Init container init2 ready: false, restart count 0 May 4 16:21:21.530: INFO: Container run1 ready: false, restart count 0 May 4 16:21:21.530: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:21:21.530: INFO: Container discover ready: false, restart count 0 May 4 16:21:21.530: INFO: Container init ready: false, restart count 0 May 4 16:21:21.530: INFO: Container install ready: false, restart count 0 May 4 16:21:21.530: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) 
May 4 16:21:21.530: INFO: Container collectd ready: true, restart count 0 May 4 16:21:21.530: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:21:21.530: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:21:21.530: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container c ready: false, restart count 0 May 4 16:21:21.530: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:21:21.530: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:21:21.530: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:21:21.530: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:21:21.530: INFO: Container nodereport ready: true, restart count 0 May 4 16:21:21.530: INFO: Container reconcile ready: true, restart count 0 May 4 16:21:21.530: INFO: to-be-attached-pod started at 2021-05-04 16:21:14 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container container1 ready: true, restart count 0 May 4 16:21:21.530: INFO: busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0 started at 2021-05-04 16:20:23 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container busybox ready: false, restart count 0 May 4 16:21:21.530: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:21:21.530: INFO: Init container install-cni ready: true, restart count 2 May 4 16:21:21.530: INFO: Container 
kube-flannel ready: true, restart count 2 May 4 16:21:21.530: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.530: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:21:21.530: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded) May 4 16:21:21.531: INFO: Container webserver ready: false, restart count 0 W0504 16:21:21.541839 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:21:21.694: INFO: Latency metrics for node node2 May 4 16:21:21.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1586" for this suite. • Failure [300.644 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be updated [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:21:21.101: Unexpected error: <*errors.errorString | 0xc0002c2200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 ------------------------------ {"msg":"FAILED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":244,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] 
[sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:21.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:21:25.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3525" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":14,"skipped":254,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:25.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for 
services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1384 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1384;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1384 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1384;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1384.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1384.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1384.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1384.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1384.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1384.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1384.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1384.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1384.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1384.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1384.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1384.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1384.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.19.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.19.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.19.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.19.32_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1384 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1384;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1384 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1384;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1384.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1384.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1384.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1384.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1384.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1384.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1384.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1384.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-1384.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1384.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1384.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1384.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1384.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.19.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.19.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.19.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.19.32_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 16:21:33.910: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938) May 4 16:21:33.912: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938) May 4 16:21:33.916: INFO: Unable to read wheezy_udp@dns-test-service.dns-1384 from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938) May 4 16:21:33.919: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1384 from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the 
server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.922: INFO: Unable to read wheezy_udp@dns-test-service.dns-1384.svc from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.926: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1384.svc from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.929: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1384.svc from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.932: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1384.svc from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.953: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.955: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.958: INFO: Unable to read jessie_udp@dns-test-service.dns-1384 from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.962: INFO: Unable to read jessie_tcp@dns-test-service.dns-1384 from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.965: INFO: Unable to read jessie_udp@dns-test-service.dns-1384.svc from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.968: INFO: Unable to read jessie_tcp@dns-test-service.dns-1384.svc from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.971: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1384.svc from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.974: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1384.svc from pod dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938: the server could not find the requested resource (get pods dns-test-24e74e07-77c7-42a5-a76d-9e215695e938)
May 4 16:21:33.991: INFO: Lookups using dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1384 wheezy_tcp@dns-test-service.dns-1384 wheezy_udp@dns-test-service.dns-1384.svc wheezy_tcp@dns-test-service.dns-1384.svc wheezy_udp@_http._tcp.dns-test-service.dns-1384.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1384.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1384 jessie_tcp@dns-test-service.dns-1384 jessie_udp@dns-test-service.dns-1384.svc jessie_tcp@dns-test-service.dns-1384.svc jessie_udp@_http._tcp.dns-test-service.dns-1384.svc jessie_tcp@_http._tcp.dns-test-service.dns-1384.svc]
May 4 16:21:39.067: INFO: DNS probes using dns-1384/dns-test-24e74e07-77c7-42a5-a76d-9e215695e938 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:21:39.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1384" for this suite.
• [SLOW TEST:13.256 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":270,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:16:44.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
May 4 16:21:44.330: FAIL: Timed out after 300.003s.
Expected
    : Pending
to equal
    : Succeeded

Full Stack Trace
k8s.io/kubernetes/test/e2e/common.glob..func25.1.2.1(0x4c5d8a0, 0x1d, 0xc0004ec0c0, 0x1e, 0xc0064bc000, 0x2, 0x2, 0xc0064ae040, 0x1, 0x1, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:154 +0x3f1
k8s.io/kubernetes/test/e2e/common.glob..func25.1.2.3()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:211 +0x292
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002947080)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002947080)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002947080, 0x4de37a0)
  /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "container-runtime-5315".
STEP: Found 9 events.
May 4 16:21:44.342: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c: { } Scheduled: Successfully assigned container-runtime-5315/termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c to node2
May 4 16:21:44.342: INFO: At 2021-05-04 16:16:45 +0000 UTC - event for termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c: {multus } AddedInterface: Add eth0 [10.244.3.211/24]
May 4 16:21:44.342: INFO: At 2021-05-04 16:16:45 +0000 UTC - event for termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:21:44.342: INFO: At 2021-05-04 16:16:46 +0000 UTC - event for termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:21:44.342: INFO: At 2021-05-04 16:16:46 +0000 UTC - event for termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:21:44.342: INFO: At 2021-05-04 16:16:48 +0000 UTC - event for termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:21:44.342: INFO: At 2021-05-04 16:16:50 +0000 UTC - event for termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c: {multus } AddedInterface: Add eth0 [10.244.3.213/24]
May 4 16:21:44.342: INFO: At 2021-05-04 16:16:50 +0000 UTC - event for termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:21:44.342: INFO: At 2021-05-04 16:16:50 +0000 UTC - event for termination-message-container10779dbf-3a4f-48c1-86c0-3b0ea708da7c: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:21:44.344: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:21:44.344: INFO:
May 4 16:21:44.348: INFO: Logging node info for node master1
May 4 16:21:44.350: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 40528 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:39 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:39 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:39 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:39 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:44.351: INFO: Logging kubelet events for node master1 May 4 16:21:44.353: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:21:44.362: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.362: INFO: Container 
kube-apiserver ready: true, restart count 0
May 4 16:21:44.362: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.362: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:21:44.362: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.362: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:21:44.362: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:44.362: INFO: Container docker-registry ready: true, restart count 0
May 4 16:21:44.362: INFO: Container nginx ready: true, restart count 0
May 4 16:21:44.362: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:44.362: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:21:44.362: INFO: Container node-exporter ready: true, restart count 0
May 4 16:21:44.362: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.362: INFO: Container kube-scheduler ready: true, restart count 0
May 4 16:21:44.362: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:21:44.362: INFO: Init container install-cni ready: true, restart count 0
May 4 16:21:44.362: INFO: Container kube-flannel ready: true, restart count 3
May 4 16:21:44.362: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.362: INFO: Container kube-multus ready: true, restart count 1
May 4 16:21:44.362: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.362: INFO: Container coredns ready: true, restart count 1
May 4 16:21:44.362: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.362: INFO: Container nfd-controller ready: true, restart count 0
W0504 16:21:44.375622 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:21:44.407: INFO: Latency metrics for node master1
May 4 16:21:44.407: INFO: Logging node info for node master2
May 4 16:21:44.410: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 40494 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:21:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:38 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:21:44.410: INFO: Logging kubelet events for node master2
May 4 16:21:44.412: INFO: Logging pods the kubelet thinks is on node master2
May 4 16:21:44.421: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:21:44.421: INFO: Init container install-cni ready: true, restart count 0
May 4 16:21:44.421: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:21:44.421: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.421: INFO: Container kube-multus ready: true, restart count 1
May 4 16:21:44.421: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.421: INFO: Container autoscaler ready: true, restart count 1
May 4 16:21:44.421: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:21:44.421: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:21:44.421: INFO: Container node-exporter ready: true, restart count 0
May 4 16:21:44.421: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.421: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:21:44.421: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.421: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:21:44.421: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.421: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:21:44.421: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:21:44.421: INFO: Container kube-proxy ready: true, restart count 2
W0504 16:21:44.434820 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:21:44.465: INFO: Latency metrics for node master2
May 4 16:21:44.465: INFO: Logging node info for node master3
May 4 16:21:44.468: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 40493 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:38 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:38 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:38 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:44.468: INFO: Logging kubelet events for node master3 May 4 16:21:44.470: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:21:44.478: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.478: INFO: Container coredns ready: true, restart count 1 May 4 16:21:44.478: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:21:44.478: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:44.478: INFO: Container node-exporter ready: true, restart count 0 May 4 16:21:44.478: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.478: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:21:44.478: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.478: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:21:44.478: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.479: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:21:44.479: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.479: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:21:44.479: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses 
recorded) May 4 16:21:44.479: INFO: Init container install-cni ready: true, restart count 0 May 4 16:21:44.479: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:21:44.479: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.479: INFO: Container kube-multus ready: true, restart count 1 W0504 16:21:44.490737 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:21:44.515: INFO: Latency metrics for node master3 May 4 16:21:44.515: INFO: Logging node info for node node1 May 4 16:21:44.517: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 40571 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:43 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:43 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:43 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:43 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:44.518: INFO: Logging kubelet events for node node1 May 4 16:21:44.519: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:21:44.535: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:21:44.535: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:21:44.535: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:21:44.535: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container liveness-http ready: true, restart count 19 May 4 16:21:44.535: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container srv ready: true, restart count 0 May 4 16:21:44.535: INFO: pod-projected-configmaps-2f4387b8-0930-47f9-8f86-6498d10cab39 started at 2021-05-04 16:21:39 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container projected-configmap-volume-test ready: true, restart count 0 May 4 16:21:44.535: 
INFO: host-test-container-pod started at 2021-05-04 16:21:42 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container agnhost-container ready: true, restart count 0 May 4 16:21:44.535: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:21:44.535: INFO: Container discover ready: false, restart count 0 May 4 16:21:44.535: INFO: Container init ready: false, restart count 0 May 4 16:21:44.535: INFO: Container install ready: false, restart count 0 May 4 16:21:44.535: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:20:49 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container env3cont ready: false, restart count 0 May 4 16:21:44.535: INFO: busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 started at 2021-05-04 16:21:18 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 ready: false, restart count 0 May 4 16:21:44.535: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container kube-multus ready: true, restart count 1 May 4 16:21:44.535: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:21:44.535: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:21:44.535: INFO: Container nodereport ready: true, restart count 0 May 4 16:21:44.535: INFO: Container reconcile ready: true, restart count 0 May 4 16:21:44.535: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:21:44.535: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:21:44.535: INFO: 
Container grafana ready: true, restart count 0 May 4 16:21:44.535: INFO: Container prometheus ready: true, restart count 1 May 4 16:21:44.535: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:21:44.535: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:21:44.535: INFO: netserver-0 started at 2021-05-04 16:21:19 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container webserver ready: true, restart count 0 May 4 16:21:44.535: INFO: test-container-pod started at 2021-05-04 16:21:42 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container webserver ready: true, restart count 0 May 4 16:21:44.535: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:21:44.535: INFO: Init container install-cni ready: true, restart count 2 May 4 16:21:44.535: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:21:44.535: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.535: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:21:44.535: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:21:44.535: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:44.535: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:21:44.535: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:21:44.535: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:44.535: INFO: Container node-exporter ready: true, restart count 0 May 4 16:21:44.535: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:21:44.535: INFO: Container collectd ready: true, restart count 0 May 4 16:21:44.535: INFO: Container 
collectd-exporter ready: true, restart count 0 May 4 16:21:44.535: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:21:44.535: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.536: INFO: Container c ready: false, restart count 0 May 4 16:21:44.536: INFO: pod-adoption started at 2021-05-04 16:21:10 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.536: INFO: Container pod-adoption ready: false, restart count 0 W0504 16:21:44.548367 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:21:44.598: INFO: Latency metrics for node node1 May 4 16:21:44.598: INFO: Logging node info for node node2 May 4 16:21:44.601: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 40555 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true 
feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:42 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:42 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:21:42 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:21:42 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 
nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:21:44.602: INFO: Logging kubelet events for node node2 May 4 16:21:44.604: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:21:44.633: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:21:44.633: INFO: Container discover ready: false, restart count 0 May 4 16:21:44.633: INFO: Container init ready: false, restart count 0 May 4 16:21:44.633: INFO: Container install ready: false, restart count 0 May 4 16:21:44.633: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:21:44.633: INFO: Container collectd ready: true, restart count 0 May 4 16:21:44.633: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:21:44.633: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:21:44.633: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container c ready: false, restart count 0 May 4 16:21:44.633: INFO: nginx-proxy-node2 started at 
2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:21:44.633: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:21:44.633: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:21:44.633: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:21:44.633: INFO: Container nodereport ready: true, restart count 0 May 4 16:21:44.633: INFO: Container reconcile ready: true, restart count 0 May 4 16:21:44.633: INFO: busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0 started at 2021-05-04 16:20:23 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container busybox ready: false, restart count 0 May 4 16:21:44.633: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:21:44.633: INFO: Init container install-cni ready: true, restart count 2 May 4 16:21:44.633: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:21:44.633: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:21:44.633: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container webserver ready: false, restart count 0 May 4 16:21:44.633: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:21:44.633: INFO: kube-proxy-rfjjf started at 
2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:21:44.633: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:21:44.633: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:21:44.633: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:21:44.633: INFO: Container node-exporter ready: true, restart count 0 May 4 16:21:44.633: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:21:44.633: INFO: Container tas-controller ready: true, restart count 0 May 4 16:21:44.633: INFO: Container tas-extender ready: true, restart count 0 May 4 16:21:44.633: INFO: netserver-1 started at 2021-05-04 16:21:20 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container webserver ready: true, restart count 0 May 4 16:21:44.633: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:21:44.633: INFO: Container kube-multus ready: true, restart count 1 May 4 16:21:44.633: INFO: pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee started at 2021-05-04 16:17:49 +0000 UTC (2+1 container statuses recorded) May 4 16:21:44.633: INFO: Init container init1 ready: false, restart count 0 May 4 16:21:44.633: INFO: Init container init2 ready: false, restart count 0 May 4 16:21:44.633: INFO: Container run1 ready: false, restart count 0 W0504 16:21:44.646379 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:21:44.676: INFO: Latency metrics for node node2 May 4 16:21:44.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5315" for this suite. • Failure [300.396 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:21:44.330: Timed out after 300.003s. Expected : Pending to equal : Succeeded /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:154 ------------------------------ {"msg":"FAILED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":443,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:39.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-10d94628-4d2f-4502-8877-c27cfdb3c62b STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-10d94628-4d2f-4502-8877-c27cfdb3c62b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:21:45.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2005" for this suite. 
• [SLOW TEST:6.142 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":311,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:19.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3391 STEP: creating a selector STEP: Creating the service pods in kubernetes May 4 16:21:19.965: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 4 16:21:20.002: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 16:21:22.006: INFO: 
The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 16:21:24.006: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:21:26.005: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:21:28.007: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:21:30.008: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:21:32.007: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:21:34.006: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:21:36.005: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:21:38.007: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:21:40.009: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:21:42.005: INFO: The status of Pod netserver-0 is Running (Ready = true) May 4 16:21:42.010: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 4 16:21:46.052: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.186 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3391 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 16:21:46.052: INFO: >>> kubeConfig: /root/.kube/config May 4 16:21:47.290: INFO: Found all expected endpoints: [netserver-0] May 4 16:21:47.293: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.241 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3391 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 16:21:47.293: INFO: >>> kubeConfig: /root/.kube/config May 4 16:21:48.393: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:21:48.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3391" for this suite. • [SLOW TEST:28.457 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":471,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:45.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 4 16:21:49.392: 
INFO: &Pod{ObjectMeta:{send-events-92d87a38-44e1-4bb1-abde-35390ee195cd events-5541 /api/v1/namespaces/events-5541/pods/send-events-92d87a38-44e1-4bb1-abde-35390ee195cd 0ccf1a79-fe2d-4f93-be5a-5dd512e4abc2 40672 0 2021-05-04 16:21:45 +0000 UTC map[name:foo time:364239294] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.242" ], "mac": "16:d4:6e:48:dc:e4", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.242" ], "mac": "16:d4:6e:48:dc:e4", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-05-04 16:21:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-04 16:21:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-04 16:21:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.242\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b5cqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b5cqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b5cqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPol
icy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:21:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:21:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:21:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-04 16:21:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.242,StartTime:2021-05-04 16:21:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-04 16:21:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:docker://f7c5a07ddab9a577262f3e0961f8a8023734f0d3d942a89bb9cebc82c138032f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 4 16:21:51.398: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 4 16:21:53.402: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:21:53.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5541" for this suite. 
• [SLOW TEST:8.072 seconds] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":17,"skipped":319,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:48.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-0e800666-2305-4644-8095-aa6ee1199590 STEP: Creating a pod to test consume configMaps May 4 16:21:48.520: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-27e8364b-a36f-4b5a-96d0-d4b4e40dcb8c" in namespace "projected-7534" to be "Succeeded or Failed" May 4 
16:21:48.522: INFO: Pod "pod-projected-configmaps-27e8364b-a36f-4b5a-96d0-d4b4e40dcb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.530183ms May 4 16:21:50.525: INFO: Pod "pod-projected-configmaps-27e8364b-a36f-4b5a-96d0-d4b4e40dcb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005641113s May 4 16:21:52.528: INFO: Pod "pod-projected-configmaps-27e8364b-a36f-4b5a-96d0-d4b4e40dcb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008747472s May 4 16:21:54.533: INFO: Pod "pod-projected-configmaps-27e8364b-a36f-4b5a-96d0-d4b4e40dcb8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013714768s STEP: Saw pod success May 4 16:21:54.533: INFO: Pod "pod-projected-configmaps-27e8364b-a36f-4b5a-96d0-d4b4e40dcb8c" satisfied condition "Succeeded or Failed" May 4 16:21:54.536: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-27e8364b-a36f-4b5a-96d0-d4b4e40dcb8c container projected-configmap-volume-test: STEP: delete the pod May 4 16:21:54.549: INFO: Waiting for pod pod-projected-configmaps-27e8364b-a36f-4b5a-96d0-d4b4e40dcb8c to disappear May 4 16:21:54.551: INFO: Pod pod-projected-configmaps-27e8364b-a36f-4b5a-96d0-d4b4e40dcb8c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:21:54.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7534" for this suite. 
• [SLOW TEST:6.079 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":511,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:53.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:21:53.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f20e23ce-6f27-462e-8faf-295ad0bbd610" in namespace "downward-api-8157" to be "Succeeded or Failed" May 4 16:21:53.477: INFO: Pod "downwardapi-volume-f20e23ce-6f27-462e-8faf-295ad0bbd610": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.170866ms May 4 16:21:55.480: INFO: Pod "downwardapi-volume-f20e23ce-6f27-462e-8faf-295ad0bbd610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005189244s May 4 16:21:57.483: INFO: Pod "downwardapi-volume-f20e23ce-6f27-462e-8faf-295ad0bbd610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007525206s STEP: Saw pod success May 4 16:21:57.483: INFO: Pod "downwardapi-volume-f20e23ce-6f27-462e-8faf-295ad0bbd610" satisfied condition "Succeeded or Failed" May 4 16:21:57.485: INFO: Trying to get logs from node node2 pod downwardapi-volume-f20e23ce-6f27-462e-8faf-295ad0bbd610 container client-container: STEP: delete the pod May 4 16:21:57.497: INFO: Waiting for pod downwardapi-volume-f20e23ce-6f27-462e-8faf-295ad0bbd610 to disappear May 4 16:21:57.499: INFO: Pod downwardapi-volume-f20e23ce-6f27-462e-8faf-295ad0bbd610 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:21:57.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8157" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":325,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:44.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6287.svc.cluster.local;check="$$(dig +tcp 
+noall +answer +search dns-test-service-2.dns-6287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6287.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6287.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6287.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6287.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6287.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 16:21:48.784: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local from pod dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:48.787: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local from pod dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:48.790: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6287.svc.cluster.local from pod dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:48.792: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6287.svc.cluster.local from pod dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:48.799: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local from pod dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:48.802: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local from pod 
dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:48.804: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6287.svc.cluster.local from pod dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:48.806: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6287.svc.cluster.local from pod dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:48.812: INFO: Lookups using dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6287.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6287.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6287.svc.cluster.local jessie_udp@dns-test-service-2.dns-6287.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6287.svc.cluster.local] May 4 16:21:53.822: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6287.svc.cluster.local from pod dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:53.825: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6287.svc.cluster.local from pod dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404: the server could not find the requested resource (get pods dns-test-87399d7b-cef7-4db3-8652-762860306404) May 4 16:21:53.848: INFO: Lookups using dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404 failed for: 
[wheezy_udp@dns-test-service-2.dns-6287.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6287.svc.cluster.local] May 4 16:21:58.847: INFO: DNS probes using dns-6287/dns-test-87399d7b-cef7-4db3-8652-762860306404 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:21:58.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6287" for this suite. • [SLOW TEST:14.147 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":16,"skipped":463,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:58.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:21:58.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f1da006-c090-492d-b9d2-00fb9ca0a0f6" in namespace "downward-api-5609" to be "Succeeded or Failed" May 4 16:21:58.915: INFO: Pod "downwardapi-volume-9f1da006-c090-492d-b9d2-00fb9ca0a0f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291084ms May 4 16:22:00.919: INFO: Pod "downwardapi-volume-9f1da006-c090-492d-b9d2-00fb9ca0a0f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006101869s May 4 16:22:02.922: INFO: Pod "downwardapi-volume-9f1da006-c090-492d-b9d2-00fb9ca0a0f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009404958s STEP: Saw pod success May 4 16:22:02.922: INFO: Pod "downwardapi-volume-9f1da006-c090-492d-b9d2-00fb9ca0a0f6" satisfied condition "Succeeded or Failed" May 4 16:22:02.925: INFO: Trying to get logs from node node1 pod downwardapi-volume-9f1da006-c090-492d-b9d2-00fb9ca0a0f6 container client-container: STEP: delete the pod May 4 16:22:03.116: INFO: Waiting for pod downwardapi-volume-9f1da006-c090-492d-b9d2-00fb9ca0a0f6 to disappear May 4 16:22:03.119: INFO: Pod downwardapi-volume-9f1da006-c090-492d-b9d2-00fb9ca0a0f6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:22:03.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5609" for this suite. 
• ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:57.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 4 16:22:05.574: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 4 16:22:05.577: INFO: Pod pod-with-prestop-http-hook still exists May 4 16:22:07.577: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 4 16:22:07.580: INFO: Pod pod-with-prestop-http-hook still exists May 4 16:22:09.577: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 4 16:22:09.581: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:22:09.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2906" for this suite. 
• [SLOW TEST:12.084 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":326,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:22:09.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 16:22:10.028: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 16:22:12.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742130, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742130, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742130, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742130, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 16:22:15.049: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: 
fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:22:15.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9206" for this suite. STEP: Destroying namespace "webhook-9206-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.332 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":20,"skipped":420,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:22:15.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-b56e19f6-4e77-4389-8198-ebe1bff00a66 STEP: Creating a pod to test consume configMaps May 4 16:22:15.207: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-89ea6718-ddfb-48dc-bb8b-df40934e1639" in namespace "projected-7766" to be "Succeeded or Failed" May 4 16:22:15.209: INFO: Pod "pod-projected-configmaps-89ea6718-ddfb-48dc-bb8b-df40934e1639": Phase="Pending", Reason="", readiness=false. Elapsed: 1.968919ms May 4 16:22:17.212: INFO: Pod "pod-projected-configmaps-89ea6718-ddfb-48dc-bb8b-df40934e1639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004907644s May 4 16:22:19.216: INFO: Pod "pod-projected-configmaps-89ea6718-ddfb-48dc-bb8b-df40934e1639": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008750726s STEP: Saw pod success May 4 16:22:19.216: INFO: Pod "pod-projected-configmaps-89ea6718-ddfb-48dc-bb8b-df40934e1639" satisfied condition "Succeeded or Failed" May 4 16:22:19.218: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-89ea6718-ddfb-48dc-bb8b-df40934e1639 container projected-configmap-volume-test: STEP: delete the pod May 4 16:22:19.323: INFO: Waiting for pod pod-projected-configmaps-89ea6718-ddfb-48dc-bb8b-df40934e1639 to disappear May 4 16:22:19.325: INFO: Pod pod-projected-configmaps-89ea6718-ddfb-48dc-bb8b-df40934e1639 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:22:19.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7766" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":465,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:54.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace 
[It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2257 STEP: creating a selector STEP: Creating the service pods in kubernetes May 4 16:21:54.588: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 4 16:21:54.631: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 16:21:56.634: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 16:21:58.634: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:22:00.634: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:22:02.634: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:22:04.635: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:22:06.634: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:22:08.635: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:22:10.635: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:22:12.634: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 16:22:14.635: INFO: The status of Pod netserver-0 is Running (Ready = true) May 4 16:22:14.639: INFO: The status of Pod netserver-1 is Running (Ready = false) May 4 16:22:16.643: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 4 16:22:20.666: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.249:8080/dial?request=hostname&protocol=http&host=10.244.4.195&port=8080&tries=1'] Namespace:pod-network-test-2257 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 16:22:20.666: INFO: >>> kubeConfig: /root/.kube/config 
May 4 16:22:20.775: INFO: Waiting for responses: map[] May 4 16:22:20.777: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.249:8080/dial?request=hostname&protocol=http&host=10.244.3.245&port=8080&tries=1'] Namespace:pod-network-test-2257 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 16:22:20.777: INFO: >>> kubeConfig: /root/.kube/config May 4 16:22:20.883: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:22:20.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2257" for this suite. • [SLOW TEST:26.323 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":514,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client May 4 16:22:21.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 4 16:22:21.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8513 api-versions' May 4 16:22:21.178: INFO: stderr: "" May 4 16:22:21.178: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:22:21.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8513" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":32,"skipped":596,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:17:49.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 4 16:17:49.524: INFO: PodSpec: initContainers in spec.initContainers May 4 16:22:49.537: FAIL: Expected <*errors.errorString | 0xc0002c4200>: { s: "timed out waiting for the condition", } to be nil Full Stack Trace k8s.io/kubernetes/test/e2e/common.glob..func11.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:225 +0xc7e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002965080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc002965080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc002965080, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run 
/usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "init-container-4063". STEP: Found 7 events. May 4 16:22:49.542: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee: { } Scheduled: Successfully assigned init-container-4063/pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee to node2 May 4 16:22:49.542: INFO: At 2021-05-04 16:17:51 +0000 UTC - event for pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee: {multus } AddedInterface: Add eth0 [10.244.3.226/24] May 4 16:22:49.542: INFO: At 2021-05-04 16:17:51 +0000 UTC - event for pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:22:49.542: INFO: At 2021-05-04 16:17:52 +0000 UTC - event for pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:22:49.542: INFO: At 2021-05-04 16:17:52 +0000 UTC - event for pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee: {kubelet node2} Failed: Error: ErrImagePull May 4 16:22:49.542: INFO: At 2021-05-04 16:17:53 +0000 UTC - event for pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:22:49.542: INFO: At 2021-05-04 16:17:53 +0000 UTC - event for pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:22:49.544: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:22:49.544: INFO: pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee node2 Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:17:49 +0000 UTC ContainersNotInitialized containers with incomplete status: [init1 init2]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:17:49 +0000 UTC ContainersNotReady containers with unready status: [run1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:17:49 +0000 UTC ContainersNotReady containers with unready status: [run1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:17:49 +0000 UTC }] May 4 16:22:49.544: INFO: May 4 16:22:49.550: INFO: Logging node info for node master1 May 4 16:22:49.553: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 41415 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock 
nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 
2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:39 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:39 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:39 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:22:39 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:22:49.553: INFO: Logging kubelet events for node master1 May 4 16:22:49.556: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:22:49.565: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 
16:22:49.565: INFO: Container docker-registry ready: true, restart count 0 May 4 16:22:49.565: INFO: Container nginx ready: true, restart count 0 May 4 16:22:49.565: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:22:49.565: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:22:49.565: INFO: Container node-exporter ready: true, restart count 0 May 4 16:22:49.565: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.565: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:22:49.565: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.565: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:22:49.565: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.565: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:22:49.565: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.565: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:22:49.565: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.565: INFO: Container coredns ready: true, restart count 1 May 4 16:22:49.565: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.565: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:22:49.565: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:22:49.565: INFO: Init container install-cni ready: true, restart count 0 May 4 16:22:49.565: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:22:49.565: INFO: 
kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.565: INFO: Container kube-multus ready: true, restart count 1 W0504 16:22:49.579714 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:22:49.603: INFO: Latency metrics for node master1 May 4 16:22:49.603: INFO: Logging node info for node master2 May 4 16:22:49.606: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 41447 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:22:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:22:49 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:22:49.606: INFO: Logging kubelet events for node master2 May 4 16:22:49.609: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:22:49.617: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.617: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:22:49.617: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.617: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:22:49.617: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.617: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:22:49.617: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:22:49.617: INFO: Init container install-cni ready: true, restart count 0 May 4 16:22:49.617: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:22:49.617: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.617: INFO: Container kube-multus ready: true, restart count 1 May 4 16:22:49.617: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.617: INFO: Container autoscaler ready: true, restart count 1 May 4 16:22:49.617: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container 
statuses recorded) May 4 16:22:49.617: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:22:49.617: INFO: Container node-exporter ready: true, restart count 0 May 4 16:22:49.617: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.617: INFO: Container kube-apiserver ready: true, restart count 0 W0504 16:22:49.629221 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:22:49.652: INFO: Latency metrics for node master2 May 4 16:22:49.652: INFO: Logging node info for node master3 May 4 16:22:49.663: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 41446 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:49 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:22:49 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:22:49.664: INFO: Logging kubelet events for node master3 May 4 16:22:49.666: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:22:49.673: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.673: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:22:49.673: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.673: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:22:49.673: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.673: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:22:49.673: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:22:49.673: INFO: Init container install-cni ready: true, restart count 0 May 4 16:22:49.673: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:22:49.673: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.673: INFO: Container kube-multus ready: true, restart count 1 May 4 16:22:49.673: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.673: INFO: Container coredns ready: true, restart count 1 May 4 16:22:49.673: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses 
recorded) May 4 16:22:49.673: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:22:49.673: INFO: Container node-exporter ready: true, restart count 0 May 4 16:22:49.673: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.673: INFO: Container kube-apiserver ready: true, restart count 0 W0504 16:22:49.687417 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:22:49.710: INFO: Latency metrics for node master3 May 4 16:22:49.710: INFO: Logging node info for node node1 May 4 16:22:49.713: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 41432 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:44 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:44 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:44 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:22:44 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:22:49.714: INFO: Logging kubelet events for node node1 May 4 16:22:49.716: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:22:49.731: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.731: INFO: Container kube-multus ready: true, restart count 1 May 4 16:22:49.731: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.731: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:22:49.731: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:22:49.731: INFO: Container nodereport ready: true, restart count 0 May 4 16:22:49.731: INFO: Container reconcile ready: true, restart count 0 May 4 16:22:49.731: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:22:49.731: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:22:49.731: INFO: Container grafana ready: true, restart count 0 May 4 16:22:49.731: INFO: Container prometheus ready: true, restart count 1 May 4 16:22:49.731: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:22:49.731: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:22:49.731: INFO: 
pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c started at 2021-05-04 16:22:19 +0000 UTC (0+2 container statuses recorded) May 4 16:22:49.731: INFO: Container busybox-main-container ready: false, restart count 0 May 4 16:22:49.731: INFO: Container busybox-sub-container ready: false, restart count 0 May 4 16:22:49.731: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:22:49.732: INFO: Init container install-cni ready: true, restart count 2 May 4 16:22:49.732: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:22:49.732: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:22:49.732: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:22:49.732: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:22:49.732: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:22:49.732: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:22:49.732: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:22:49.732: INFO: Container node-exporter ready: true, restart count 0 May 4 16:22:49.732: INFO: test-webserver-9b461c4f-7d52-4db1-9027-4951689fb2b4 started at 2021-05-04 16:22:21 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container test-webserver ready: true, restart count 0 May 4 16:22:49.732: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:22:49.732: INFO: Container collectd ready: true, restart count 0 May 4 16:22:49.732: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:22:49.732: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:22:49.732: INFO: 
fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container c ready: false, restart count 0 May 4 16:22:49.732: INFO: pod-adoption started at 2021-05-04 16:21:10 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container pod-adoption ready: false, restart count 0 May 4 16:22:49.732: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:22:49.732: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:22:49.732: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:22:49.732: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container liveness-http ready: false, restart count 19 May 4 16:22:49.732: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container srv ready: true, restart count 0 May 4 16:22:49.732: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:22:49.732: INFO: Container discover ready: false, restart count 0 May 4 16:22:49.732: INFO: Container init ready: false, restart count 0 May 4 16:22:49.732: INFO: Container install ready: false, restart count 0 May 4 16:22:49.732: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:20:49 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container env3cont ready: false, restart count 0 May 4 16:22:49.732: INFO: 
busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 started at 2021-05-04 16:21:18 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.732: INFO: Container busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 ready: false, restart count 0 W0504 16:22:49.743974 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:22:49.775: INFO: Latency metrics for node node1 May 4 16:22:49.775: INFO: Logging node info for node node2 May 4 16:22:49.778: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 41425 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true 
feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:fe
ature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:42 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:42 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:22:42 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:22:42 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:22:49.779: INFO: Logging kubelet events for node node2 May 4 16:22:49.781: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:22:49.794: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:22:49.794: INFO: Container tas-controller ready: true, restart count 0 May 4 16:22:49.794: INFO: Container tas-extender ready: true, restart count 0 May 4 16:22:49.794: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:22:49.794: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:22:49.794: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:22:49.794: INFO: 
node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:22:49.794: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:22:49.794: INFO: Container node-exporter ready: true, restart count 0 May 4 16:22:49.794: INFO: pod-init-485103d2-8ff5-4cc8-93a4-a2bc5ba380ee started at 2021-05-04 16:17:49 +0000 UTC (2+1 container statuses recorded) May 4 16:22:49.794: INFO: Init container init1 ready: false, restart count 0 May 4 16:22:49.794: INFO: Init container init2 ready: false, restart count 0 May 4 16:22:49.794: INFO: Container run1 ready: false, restart count 0 May 4 16:22:49.794: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container kube-multus ready: true, restart count 1 May 4 16:22:49.794: INFO: termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4 started at 2021-05-04 16:22:03 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container termination-message-container ready: false, restart count 0 May 4 16:22:49.794: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:22:49.794: INFO: Container discover ready: false, restart count 0 May 4 16:22:49.794: INFO: Container init ready: false, restart count 0 May 4 16:22:49.794: INFO: Container install ready: false, restart count 0 May 4 16:22:49.794: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:22:49.794: INFO: Container collectd ready: true, restart count 0 May 4 16:22:49.794: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:22:49.794: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:22:49.794: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container c ready: false, restart count 0 May 4 
16:22:49.794: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:22:49.794: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:22:49.794: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:22:49.794: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:22:49.794: INFO: Container nodereport ready: true, restart count 0 May 4 16:22:49.794: INFO: Container reconcile ready: true, restart count 0 May 4 16:22:49.794: INFO: busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0 started at 2021-05-04 16:20:23 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container busybox ready: false, restart count 0 May 4 16:22:49.794: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:22:49.794: INFO: Init container install-cni ready: true, restart count 2 May 4 16:22:49.794: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:22:49.794: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:22:49.794: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded) May 4 16:22:49.794: INFO: Container webserver ready: false, restart count 0 W0504 16:22:49.809146 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:22:49.853: INFO: Latency metrics for node node2 May 4 16:22:49.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4063" for this suite. • Failure [300.354 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:22:49.537: Expected <*errors.errorString | 0xc0002c4200>: { s: "timed out waiting for the condition", } to be nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:225 ------------------------------ {"msg":"FAILED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":24,"skipped":433,"failed":2,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:20:23.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0 in namespace container-probe-9817 May 4 16:25:23.676: FAIL: starting pod busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0 in namespace container-probe-9817 Unexpected error: <*errors.errorString | 0xc0003001f0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/common.RunLivenessTest(0xc00059e580, 0xc001985000, 0x0, 0x37e11d6000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:426 +0x4b9 k8s.io/kubernetes/test/e2e/common.glob..func3.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:146 +0x197 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000179e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc000179e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000179e00, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "container-probe-9817". STEP: Found 7 events. 
May 4 16:25:23.689: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0: { } Scheduled: Successfully assigned container-probe-9817/busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0 to node2 May 4 16:25:23.689: INFO: At 2021-05-04 16:20:25 +0000 UTC - event for busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0: {multus } AddedInterface: Add eth0 [10.244.3.234/24] May 4 16:25:23.689: INFO: At 2021-05-04 16:20:25 +0000 UTC - event for busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:25:23.689: INFO: At 2021-05-04 16:20:26 +0000 UTC - event for busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:25:23.689: INFO: At 2021-05-04 16:20:26 +0000 UTC - event for busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0: {kubelet node2} Failed: Error: ErrImagePull May 4 16:25:23.689: INFO: At 2021-05-04 16:20:26 +0000 UTC - event for busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:25:23.689: INFO: At 2021-05-04 16:20:26 +0000 UTC - event for busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:25:23.691: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:25:23.691: INFO: May 4 16:25:23.695: INFO: Logging node info for node master1 May 4 16:25:23.698: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 42102 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux 
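[Editor's note] The events above show the real cause of this test failure: the kubelet on node2 hit Docker Hub's anonymous pull rate limit (`toomanyrequests`) while fetching `busybox:1.29`, so the pod never started and the framework timed out waiting for the condition. A common mitigation is to authenticate image pulls with a registry credential secret and reference it from the pod (or the namespace's default service account). The sketch below is illustrative only; the secret name `regcred` and the credentials are hypothetical placeholders, not part of this test run.

```yaml
# Hypothetical mitigation sketch: attach Docker Hub credentials so pulls
# count against an authenticated (higher) rate limit.
# First create the secret (credentials are placeholders):
#   kubectl create secret docker-registry regcred \
#     --docker-server=https://index.docker.io/v1/ \
#     --docker-username=<user> --docker-password=<token>
apiVersion: v1
kind: Pod
metadata:
  name: busybox-example
spec:
  imagePullSecrets:
  - name: regcred            # secret created above (hypothetical name)
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
```

Alternatively, pointing the nodes at a pull-through registry mirror (as this cluster already does for `localhost:30500` images) avoids Docker Hub limits entirely for cached images.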
node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"
f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:20 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:20 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:20 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:20 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:25:23.699: INFO: Logging kubelet events for node master1 May 4 16:25:23.701: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:25:23.716: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:25:23.716: INFO: Init container install-cni ready: true, restart count 0 May 4 16:25:23.716: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:25:23.716: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.716: INFO: Container kube-multus ready: true, restart count 1 May 4 16:25:23.716: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.716: INFO: Container coredns ready: true, restart count 1 May 4 16:25:23.716: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.716: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:25:23.716: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.716: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:25:23.716: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.716: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:25:23.716: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.716: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:25:23.716: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:25:23.716: INFO: Container docker-registry ready: 
true, restart count 0 May 4 16:25:23.716: INFO: Container nginx ready: true, restart count 0 May 4 16:25:23.716: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:25:23.716: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:23.716: INFO: Container node-exporter ready: true, restart count 0 May 4 16:25:23.716: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.716: INFO: Container kube-scheduler ready: true, restart count 0 W0504 16:25:23.727682 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:25:23.759: INFO: Latency metrics for node master1 May 4 16:25:23.759: INFO: Logging node info for node master2 May 4 16:25:23.762: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 42095 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:19 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:25:19 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:19 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:19 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:25:23.762: INFO: Logging kubelet events for node master2 May 4 16:25:23.764: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:25:23.778: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.778: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:25:23.778: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:25:23.778: INFO: Init container install-cni ready: true, restart count 0 May 4 16:25:23.778: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:25:23.778: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.778: INFO: Container kube-multus ready: true, restart count 1 May 4 16:25:23.778: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.778: INFO: Container autoscaler ready: true, restart count 1 May 4 16:25:23.778: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:25:23.778: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:23.778: INFO: Container node-exporter ready: true, restart count 0 May 4 16:25:23.778: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.778: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:25:23.778: INFO: 
kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.778: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:25:23.778: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.778: INFO: Container kube-scheduler ready: true, restart count 2 W0504 16:25:23.791718 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:25:23.814: INFO: Latency metrics for node master2 May 4 16:25:23.814: INFO: Logging node info for node master3 May 4 16:25:23.817: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 42093 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:19 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:19 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:19 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:19 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:25:23.818: INFO: Logging kubelet events for node master3 May 4 16:25:23.820: INFO: Logging pods the kubelet thinks are on node master3 May 4 16:25:23.835: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.835: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:25:23.835: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.835: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:25:23.835: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:25:23.835: INFO: Init container install-cni ready: true, restart count 0 May 4 16:25:23.835: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:25:23.835: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.835: INFO: Container kube-multus ready: true, restart count 1 May 4 16:25:23.835: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.835: INFO: Container coredns ready: true, restart count 1 May 4 16:25:23.835: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:25:23.835: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:23.835: INFO: Container node-exporter ready: true, restart count 0 May 4 16:25:23.835: INFO: kube-apiserver-master3 started 
at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.835: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:25:23.835: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.835: INFO: Container kube-controller-manager ready: true, restart count 2 W0504 16:25:23.847281 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:25:23.875: INFO: Latency metrics for node master3 May 4 16:25:23.875: INFO: Logging node info for node node1 May 4 16:25:23.878: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 42082 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:15 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:15 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:15 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:15 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:25:23.879: INFO: Logging kubelet events for node node1 May 4 16:25:23.880: INFO: Logging pods the kubelet thinks are on node node1 May 4 16:25:23.904: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:25:23.904: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:25:23.904: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:25:23.904: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container liveness-http ready: false, restart count 19 May 4 16:25:23.904: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container srv ready: true, restart count 0 May 4 16:25:23.904: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:25:23.904: INFO: Container discover ready: false, restart count 0 May 4 16:25:23.904: INFO: Container init ready: false, restart count 0 May 
4 16:25:23.904: INFO: Container install ready: false, restart count 0 May 4 16:25:23.904: INFO: client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 started at 2021-05-04 16:20:49 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container env3cont ready: false, restart count 0 May 4 16:25:23.904: INFO: busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 started at 2021-05-04 16:21:18 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 ready: false, restart count 0 May 4 16:25:23.904: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container kube-multus ready: true, restart count 1 May 4 16:25:23.904: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:25:23.904: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:25:23.904: INFO: Container nodereport ready: true, restart count 0 May 4 16:25:23.904: INFO: Container reconcile ready: true, restart count 0 May 4 16:25:23.904: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:25:23.904: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:25:23.904: INFO: Container grafana ready: true, restart count 0 May 4 16:25:23.904: INFO: Container prometheus ready: true, restart count 1 May 4 16:25:23.904: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:25:23.904: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:25:23.904: INFO: pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c started at 2021-05-04 16:22:19 +0000 UTC (0+2 container statuses recorded) May 4 
16:25:23.904: INFO: Container busybox-main-container ready: false, restart count 0 May 4 16:25:23.904: INFO: Container busybox-sub-container ready: false, restart count 0 May 4 16:25:23.904: INFO: test-webserver-9b461c4f-7d52-4db1-9027-4951689fb2b4 started at 2021-05-04 16:22:21 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container test-webserver ready: true, restart count 0 May 4 16:25:23.904: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:25:23.904: INFO: Init container install-cni ready: true, restart count 2 May 4 16:25:23.904: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:25:23.904: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:25:23.904: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:25:23.904: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:23.904: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:25:23.904: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:25:23.904: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:23.904: INFO: Container node-exporter ready: true, restart count 0 May 4 16:25:23.904: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:25:23.904: INFO: Container collectd ready: true, restart count 0 May 4 16:25:23.904: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:25:23.904: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:25:23.904: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.904: INFO: Container c ready: 
false, restart count 0 May 4 16:25:23.905: INFO: pod-adoption started at 2021-05-04 16:21:10 +0000 UTC (0+1 container statuses recorded) May 4 16:25:23.905: INFO: Container pod-adoption ready: false, restart count 0 W0504 16:25:23.918516 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:25:23.969: INFO: Latency metrics for node node1 May 4 16:25:23.969: INFO: Logging node info for node node2 May 4 16:25:23.972: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 42110 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true 
feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:fe
ature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:23 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:23 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:23 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:23 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:25:23.972: INFO: Logging kubelet events for node node2
May 4 16:25:23.975: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:25:23.995: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:25:23.995: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container webserver ready: false, restart count 0
May 4 16:25:23.995: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:25:23.995: INFO: Init container install-cni ready: true, restart count 2
May 4 16:25:23.995: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:25:23.995: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:25:23.995: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:25:23.995: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:25:23.995: INFO: Container node-exporter ready: true, restart count 0
May 4 16:25:23.995: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:25:23.995: INFO: Container tas-controller ready: true, restart count 0
May 4 16:25:23.995: INFO: Container tas-extender ready: true, restart count 0
May 4 16:25:23.995: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:25:23.995: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:25:23.995: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container kube-multus ready: true, restart count 1
May 4 16:25:23.995: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:25:23.995: INFO: Container collectd ready: true, restart count 0
May 4 16:25:23.995: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:25:23.995: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:25:23.995: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container c ready: false, restart count 0
May 4 16:25:23.995: INFO: termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4 started at 2021-05-04 16:22:03 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container termination-message-container ready: false, restart count 0
May 4 16:25:23.995: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:25:23.995: INFO: Container discover ready: false, restart count 0
May 4 16:25:23.995: INFO: Container init ready: false, restart count 0
May 4 16:25:23.995: INFO: Container install ready: false, restart count 0
May 4 16:25:23.995: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:25:23.995: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:25:23.995: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:25:23.995: INFO: Container nodereport ready: true, restart count 0
May 4 16:25:23.995: INFO: Container reconcile ready: true, restart count 0
May 4 16:25:23.995: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 started at 2021-05-04 16:22:49 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container main ready: false, restart count 0
May 4 16:25:23.995: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:23.995: INFO: Container kubernetes-dashboard ready: true, restart count 1
W0504 16:25:24.009825      34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:25:24.048: INFO: Latency metrics for node node2
May 4 16:25:24.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9817" for this suite.
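The "Waiting up to 3m0s …" and "Waiting up to 5m0s …" messages in this log come from polling helpers: they re-check a condition on a fixed interval until a deadline, then fail with "timed out waiting for the condition". A minimal sketch of that poll-until-timeout shape, in Python rather than the suite's actual Go implementation (`wait_for_condition` and its parameters are illustrative names, not the framework's API):

```python
import time

def wait_for_condition(condition, timeout, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True,
    or raise once `timeout` seconds have elapsed -- the same shape as
    the "Waiting up to ..." loops whose INFO lines appear in this log."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return
        sleep(interval)
    # Same error text as reported by the failing spec in this log.
    raise TimeoutError("timed out waiting for the condition")
```

The injectable `clock` and `sleep` parameters are only there so the loop can be exercised without real waiting; they are not part of the pattern itself.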
• Failure [300.430 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:25:23.676: starting pod busybox-fe2bb9a9-1bbd-4e3b-bdc3-65746a06d3c0 in namespace container-probe-9817
  Unexpected error:
      <*errors.errorString | 0xc0003001f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:426
------------------------------
{"msg":"FAILED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":188,"failed":4,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","[k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:10:40.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:10:44.190: INFO: Waiting up to 5m0s for pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" in namespace "pods-2302" to be "Succeeded or Failed"
May 4 16:10:44.192: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387737ms
May 4 16:10:46.195: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005048639s
May 4 16:10:48.198: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008330081s
May 4 16:10:50.203: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01292739s
May 4 16:10:52.207: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01688963s
May 4 16:10:54.211: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021554297s
May 4 16:10:56.215: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024915137s
May 4 16:10:58.218: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 14.027970856s
May 4 16:11:00.221: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 16.031271402s
May 4 16:11:02.224: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 18.034560075s
May 4 16:11:04.230: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 20.040597735s
May 4 16:11:06.233: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 22.043630728s
May 4 16:11:08.237: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 24.047617964s
May 4 16:11:10.241: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 26.050933979s
May 4 16:11:12.246: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 28.055839793s
May 4 16:11:14.250: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 30.059768635s
May 4 16:11:16.253: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 32.062840352s
May 4 16:11:18.257: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 34.067105498s
May 4 16:11:20.260: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 36.070155248s
May 4 16:11:22.263: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 38.072693521s
May 4 16:11:24.265: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 40.075599917s
May 4 16:11:26.269: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 42.07914466s
May 4 16:11:28.273: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 44.082830404s
May 4 16:11:30.277: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 46.086680807s
May 4 16:11:32.279: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 48.089122005s
May 4 16:11:34.283: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 50.09270859s
May 4 16:11:36.286: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 52.096635192s
May 4 16:11:38.289: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 54.099058174s
May 4 16:11:40.292: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 56.102372708s
May 4 16:11:42.295: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 58.104707357s
May 4 16:11:44.298: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.108164815s
May 4 16:11:46.301: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.111079673s
May 4 16:11:48.304: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.114158777s
May 4 16:11:50.307: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.117332077s
May 4 16:11:52.310: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.120166469s
May 4 16:11:54.314: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.12425816s
May 4 16:11:56.318: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.12765473s
May 4 16:11:58.322: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.132620748s
May 4 16:12:00.326: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.136250965s
May 4 16:12:02.329: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.139538752s
May 4 16:12:04.336: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.145906521s
May 4 16:12:06.340: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.150502445s
May 4 16:12:08.344: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.154579048s
May 4 16:12:10.348: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.15859473s
May 4 16:12:12.351: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.161278768s
May 4 16:12:14.355: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.165502642s
May 4 16:12:16.359: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.169294447s
May 4 16:12:18.362: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.172582128s
May 4 16:12:20.367: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.177632595s
May 4 16:12:22.371: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.181363272s
May 4 16:12:24.375: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.18502512s
May 4 16:12:26.378: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.188164789s
May 4 16:12:28.382: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.191772996s
May 4 16:12:30.387: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.196949124s
May 4 16:12:32.390: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.200486048s
May 4 16:12:34.394: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.204426236s
May 4 16:12:36.399: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.208931419s
May 4 16:12:38.404: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.213679041s
May 4 16:12:40.409: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.219369851s
May 4 16:12:42.412: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.222533904s
May 4 16:12:44.415: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.22530163s
May 4 16:12:46.418: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.228198071s
May 4 16:12:48.422: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.231944139s
May 4 16:12:50.424: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.234530682s
May 4 16:12:52.427: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.237340984s
May 4 16:12:54.430: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.240104216s
May 4 16:12:56.433: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.243209422s
May 4 16:12:58.436: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.245772999s
May 4 16:13:00.439: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.249389394s
May 4 16:13:02.442: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.252588877s
May 4 16:13:04.446: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.256164321s
May 4 16:13:06.449: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.258966285s
May 4 16:13:08.453: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.262705925s
May 4 16:13:10.457: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.267198143s
May 4 16:13:12.460: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.270464458s
May 4 16:13:14.464: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.274102883s
May 4 16:13:16.467: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.277574814s
May 4 16:13:18.471: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.281544017s
May 4 16:13:20.475: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.284757306s
May 4 16:13:22.478: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.287724425s
May 4 16:13:24.480: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.2903342s
May 4 16:13:26.483: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.293630885s
May 4 16:13:28.487: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.296702802s
May 4 16:13:30.490: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.30052558s
May 4 16:13:32.494: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.303966548s
May 4 16:13:34.498: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.307719687s
May 4 16:13:36.501: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.311542052s
May 4 16:13:38.504: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.314521164s
May 4 16:13:40.507: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.317596121s
May 4 16:13:42.510: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.320254572s
May 4 16:13:44.513: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.32292295s
May 4 16:13:46.515: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.325405239s
May 4 16:13:48.518: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.327938597s
May 4 16:13:50.522: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.332445539s
May 4 16:13:52.526: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.336638233s
May 4 16:13:54.530: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.340038397s
May 4 16:13:56.534: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.343953682s
May 4 16:13:58.538: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.348589509s
May 4 16:14:00.543: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.352894856s
May 4 16:14:02.547: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.357359982s
May 4 16:14:04.552: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.36232504s
May 4 16:14:06.556: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.365722217s
May 4 16:14:08.561: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.371027721s
May 4 16:14:10.566: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.375922743s
May 4 16:14:12.569: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.37915381s
May 4 16:14:14.573: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.38268475s
May 4 16:14:16.577: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.386722486s
May 4 16:14:18.582: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.392138653s
May 4 16:14:20.586: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.396499412s
May 4 16:14:22.589: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.399024185s
May 4 16:14:24.592: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.401975335s
May 4 16:14:26.597: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.407583303s
May 4 16:14:28.602: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.412315431s
May 4 16:14:30.607: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.416835334s
May 4 16:14:32.610: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.420140068s
May 4 16:14:34.613: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.423573141s
May 4 16:14:36.617: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.42732293s
May 4 16:14:38.621: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.43140031s
May 4 16:14:40.625: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.435439817s
May 4 16:14:42.628: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false.
Elapsed: 3m58.438371798s May 4 16:14:44.632: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.441810606s May 4 16:14:46.636: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.446286891s May 4 16:14:48.640: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.450469377s May 4 16:14:50.644: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.454210918s May 4 16:14:52.648: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.457954378s May 4 16:14:54.651: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.460816818s May 4 16:14:56.655: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.465070873s May 4 16:14:58.658: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.468267874s May 4 16:15:00.663: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.472908465s May 4 16:15:02.667: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.477504538s May 4 16:15:04.671: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.481066686s May 4 16:15:06.674: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m22.483733149s May 4 16:15:08.678: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.488496996s May 4 16:15:10.683: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.492844859s May 4 16:15:12.687: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.497194266s May 4 16:15:14.691: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.500834246s May 4 16:15:16.695: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.504800727s May 4 16:15:18.698: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.508214412s May 4 16:15:20.703: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.513036802s May 4 16:15:22.708: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.518360261s May 4 16:15:24.711: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.521553626s May 4 16:15:26.715: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.525224056s May 4 16:15:28.717: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.527625437s May 4 16:15:30.720: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m46.530394579s [... identical Phase="Pending" poll entries elided, every ~2s from May 4 16:15:32.723 through May 4 16:15:42.742 (Elapsed: 4m58.552555981s) ...] May 4 16:15:44.750: INFO: Failed to get logs from node "node1" pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" container "env3cont": the server rejected our request for an unknown reason (get pods client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49) STEP: delete the pod May 4 16:15:44.754: INFO: Waiting for pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to disappear May 4 16:15:44.757: INFO: Pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 still exists May 4 16:15:46.757: INFO: Waiting for pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to disappear May 4 16:15:46.760: INFO: Pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 no longer exists May 4 16:15:46.760: INFO: (Attempt 1 of 3) Unexpected error occurred: expected pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" success: Gave up after waiting 5m0s for pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" to be "Succeeded or Failed" May 4 16:15:46.776: INFO: Waiting up to 5m0s for pod 
"client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" in namespace "pods-2302" to be "Succeeded or Failed" May 4 16:15:46.778: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 1.835657ms [... identical Phase="Pending" poll entries elided, every ~2s from May 4 16:15:48.781 through May 4 16:20:37.309 ...]
Elapsed: 4m50.532963329s
May 4 16:20:39.313: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.536438042s
May 4 16:20:41.316: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.539568908s
May 4 16:20:43.319: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.543022667s
May 4 16:20:45.323: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.546617957s
May 4 16:20:47.332: INFO: Failed to get logs from node "node2" pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" container "env3cont": the server rejected our request for an unknown reason (get pods client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49)
STEP: delete the pod
May 4 16:20:47.339: INFO: Waiting for pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to disappear
May 4 16:20:47.341: INFO: Pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 still exists
May 4 16:20:49.342: INFO: Waiting for pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to disappear
May 4 16:20:49.344: INFO: Pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 no longer exists
May 4 16:20:49.344: INFO: (Attempt 2 of 3) Unexpected error occurred: expected pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" success: Gave up after waiting 5m0s for pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" to be "Succeeded or Failed"
May 4 16:20:49.356: INFO: Waiting up to 5m0s for pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" in namespace "pods-2302" to be "Succeeded or Failed"
May 4 16:20:49.358: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false.
Elapsed: 1.796197ms
May 4 16:20:51.362: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005555726s
[... identical "Pending" poll entries, repeated every ~2s for the next ~4m50s, elided ...]
May 4 16:25:41.871: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false.
Elapsed: 4m52.514935217s
May 4 16:25:43.876: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.51999705s
May 4 16:25:45.879: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.522948483s
May 4 16:25:47.882: INFO: Pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.525991384s
May 4 16:25:49.891: INFO: Failed to get logs from node "node1" pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" container "env3cont": the server rejected our request for an unknown reason (get pods client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49)
STEP: delete the pod
May 4 16:25:49.897: INFO: Waiting for pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to disappear
May 4 16:25:49.899: INFO: Pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 still exists
May 4 16:25:51.902: INFO: Waiting for pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to disappear
May 4 16:25:51.907: INFO: Pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 still exists
May 4 16:25:53.901: INFO: Waiting for pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to disappear
May 4 16:25:53.904: INFO: Pod client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 no longer exists
May 4 16:25:53.904: INFO: (Attempt 3 of 3) Unexpected error occurred: expected pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" success: Gave up after waiting 5m0s for pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" to be "Succeeded or Failed"
goroutine 217 [running]:
runtime/debug.Stack(0x4, 0x4be5f6e, 0x2)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
	/usr/local/go/src/runtime/debug/stack.go:16 +0x25
k8s.io/kubernetes/test/e2e/common.expectNoErrorWithRetries(0xc001a67108, 0x3, 0xc004e11900, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:173 +0x2bc
k8s.io/kubernetes/test/e2e/common.glob..func18.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:527 +0xaf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000c1ac00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000c1ac00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc0007bf400, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0040baff0, 0x0, 0x52e17e0, 0xc000190900)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0040baff0, 0x52e17e0, 0xc000190900)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00169e000, 0xc0040baff0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00169e000, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00169e000, 0xc002f5c030)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7efd15c06970, 0xc001827080, 0x4c22012, 0x14, 0xc0039a57a0, 0x3, 0x3, 0x5396840, 0xc000190900, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc001827080, 0x4c22012, 0x14, 0xc000eb0240, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc001827080, 0x4c22012, 0x14, 0xc002452ca0, 0x2, 0x2, 0x2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001827080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001827080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001827080, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
May 4 16:25:53.905: FAIL: Container should have service environment variables set
Unexpected error:
    <*errors.errorString | 0xc0028b8490>: {
        s: "expected pod \"client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49\" success: Gave up after waiting 5m0s for pod \"client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49\" to be \"Succeeded or Failed\"",
    }
    expected pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" success: Gave up after waiting 5m0s for pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" to be "Succeeded or Failed"
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/common.glob..func18.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:527 +0xaf2
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001827080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001827080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001827080, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "pods-2302".
STEP: Found 31 events.
May 4 16:25:53.910: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: { } Scheduled: Successfully assigned pods-2302/client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to node2
May 4 16:25:53.910: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: { } Scheduled: Successfully assigned pods-2302/client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to node1
May 4 16:25:53.910: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: { } Scheduled: Successfully assigned pods-2302/client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49 to node1
May 4 16:25:53.910: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4: { } Scheduled: Successfully assigned pods-2302/server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 to node1
May 4 16:25:53.910: INFO: At 2021-05-04 16:10:41 +0000 UTC - event for server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 4 16:25:53.910: INFO: At 2021-05-04 16:10:41 +0000 UTC - event for server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4: {multus } AddedInterface: Add eth0 [10.244.4.143/24]
May 4 16:25:53.910: INFO: At 2021-05-04 16:10:42 +0000 UTC - event for server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 450.386435ms
May 4 16:25:53.910: INFO: At 2021-05-04 16:10:42 +0000 UTC - event for server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4: {kubelet node1} Created: Created container srv
May 4 16:25:53.911: INFO: At 2021-05-04 16:10:42 +0000 UTC - event for server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4: {kubelet node1} Started: Started container srv
May 4 16:25:53.911: INFO: At 2021-05-04 16:10:45 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:25:53.911: INFO: At 2021-05-04 16:10:45 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {multus } AddedInterface: Add eth0 [10.244.4.144/24]
May 4 16:25:53.911: INFO: At 2021-05-04 16:10:46 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} Failed: Error: ErrImagePull
May 4 16:25:53.911: INFO: At 2021-05-04 16:10:46 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:25:53.911: INFO: At 2021-05-04 16:10:47 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:25:53.911: INFO: At 2021-05-04 16:10:48 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} Failed: Error: ImagePullBackOff
May 4 16:25:53.911: INFO: At 2021-05-04 16:10:48 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:25:53.911: INFO: At 2021-05-04 16:10:48 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {multus } AddedInterface: Add eth0 [10.244.4.145/24]
May 4 16:25:53.911: INFO: At 2021-05-04 16:15:48 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:25:53.911: INFO: At 2021-05-04 16:15:48 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {multus } AddedInterface: Add eth0 [10.244.3.205/24]
May 4 16:25:53.911: INFO: At 2021-05-04 16:15:49 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:25:53.911: INFO: At 2021-05-04 16:15:49 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:25:53.911: INFO: At 2021-05-04 16:15:49 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node2} Failed: Error: ErrImagePull May 4 16:25:53.911: INFO: At 2021-05-04 16:15:49 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:25:53.911: INFO: At 2021-05-04 16:20:50 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {multus } AddedInterface: Add eth0 [10.244.4.176/24] May 4 16:25:53.911: INFO: At 2021-05-04 16:20:50 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:25:53.911: INFO: At 2021-05-04 16:20:51 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} Failed: Error: ErrImagePull May 4 16:25:53.911: INFO: At 2021-05-04 16:20:51 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:25:53.911: INFO: At 2021-05-04 16:20:52 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
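Editor's note: the repeated `toomanyrequests` failures above are Docker Hub's anonymous pull rate limit rejecting the `busybox:1.29` pull, which is why the client pod never reaches "Succeeded or Failed" and the 5m0s wait gives up. A common mitigation (not part of this test run) is to attach authenticated registry credentials to the pod via `imagePullSecrets`. A minimal sketch, assuming a `docker-registry` secret named `regcred` has already been created in the namespace; all names below are illustrative, not taken from this log:

```yaml
# Hypothetical pod spec showing imagePullSecrets; names are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-example   # hypothetical pod name
spec:
  imagePullSecrets:
    - name: regcred              # assumed pre-created docker-registry secret
  restartPolicy: Never
  containers:
    - name: env3cont
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "env"]
```

The secret itself would typically be created with `kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<token>`; alternatively, the e2e images could be mirrored to a registry that is not rate-limited (the suite already uses a local `localhost:30500` registry for some images).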
May 4 16:25:53.911: INFO: At 2021-05-04 16:20:54 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {multus } AddedInterface: Add eth0 [10.244.4.178/24] May 4 16:25:53.911: INFO: At 2021-05-04 16:20:54 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:25:53.911: INFO: At 2021-05-04 16:20:54 +0000 UTC - event for client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49: {kubelet node1} Failed: Error: ImagePullBackOff May 4 16:25:53.913: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:25:53.913: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:10:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:10:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:10:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:10:40 +0000 UTC }] May 4 16:25:53.913: INFO: May 4 16:25:53.917: INFO: Logging node info for node master1 May 4 16:25:53.920: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 42260 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:50 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:25:53.920: INFO: Logging kubelet events for node master1 May 4 16:25:53.922: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:25:53.945: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.945: INFO: Container 
kube-multus ready: true, restart count 1 May 4 16:25:53.945: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.945: INFO: Container coredns ready: true, restart count 1 May 4 16:25:53.945: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.945: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:25:53.945: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:25:53.945: INFO: Init container install-cni ready: true, restart count 0 May 4 16:25:53.945: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:25:53.945: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.945: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:25:53.945: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:25:53.945: INFO: Container docker-registry ready: true, restart count 0 May 4 16:25:53.945: INFO: Container nginx ready: true, restart count 0 May 4 16:25:53.945: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:25:53.945: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:53.945: INFO: Container node-exporter ready: true, restart count 0 May 4 16:25:53.945: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.945: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:25:53.945: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.945: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:25:53.945: INFO: kube-controller-manager-master1 
started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.945: INFO: Container kube-controller-manager ready: true, restart count 2 W0504 16:25:53.957382 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:25:53.986: INFO: Latency metrics for node master1 May 4 16:25:53.986: INFO: Logging node info for node master2 May 4 16:25:53.989: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 42252 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:25:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:49 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:25:53.989: INFO: Logging kubelet events for node master2 May 4 16:25:53.991: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:25:53.999: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.999: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:25:53.999: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:25:53.999: INFO: Init container install-cni ready: true, restart count 0 May 4 16:25:53.999: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:25:53.999: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.999: INFO: Container kube-multus ready: true, restart count 1 May 4 16:25:53.999: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.999: INFO: Container autoscaler ready: true, restart count 1 May 4 16:25:53.999: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:25:53.999: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:53.999: INFO: Container node-exporter ready: true, restart count 0 May 4 16:25:53.999: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.999: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:25:53.999: INFO: 
kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.999: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:25:53.999: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:53.999: INFO: Container kube-scheduler ready: true, restart count 2 W0504 16:25:54.011625 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:25:54.034: INFO: Latency metrics for node master2 May 4 16:25:54.035: INFO: Logging node info for node master3 May 4 16:25:54.037: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 42251 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:49 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:49 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:49 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:25:54.037: INFO: Logging kubelet events for node master3 May 4 16:25:54.039: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:25:54.047: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.047: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:25:54.047: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:25:54.047: INFO: Init container install-cni ready: true, restart count 0 May 4 16:25:54.047: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:25:54.047: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.047: INFO: Container kube-multus ready: true, restart count 1 May 4 16:25:54.047: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.047: INFO: Container coredns ready: true, restart count 1 May 4 16:25:54.047: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:25:54.047: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:54.047: INFO: Container node-exporter ready: true, restart count 0 May 4 16:25:54.047: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.047: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:25:54.047: INFO: kube-controller-manager-master3 
started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.047: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:25:54.047: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.047: INFO: Container kube-scheduler ready: true, restart count 2 W0504 16:25:54.060367 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:25:54.087: INFO: Latency metrics for node master3 May 4 16:25:54.087: INFO: Logging node info for node node1 May 4 16:25:54.090: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 42239 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:45 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:45 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:45 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:45 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:25:54.091: INFO: Logging kubelet events for node node1 May 4 16:25:54.092: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:25:54.107: INFO: pod-adoption started at 2021-05-04 16:21:10 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.107: INFO: Container pod-adoption ready: false, restart count 0 May 4 16:25:54.107: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:25:54.107: INFO: Container collectd ready: true, restart count 0 May 4 16:25:54.107: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:25:54.107: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:25:54.107: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.107: INFO: Container c ready: false, restart count 0 May 4 16:25:54.107: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.107: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:25:54.107: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.107: INFO: Container liveness-http ready: false, restart count 19 May 4 16:25:54.107: INFO: server-envvars-e2e8d4b8-6525-4f40-9a98-8cccf5c227b4 started at 2021-05-04 16:10:40 +0000 UTC (0+1 container statuses recorded) May 4 
16:25:54.107: INFO: Container srv ready: true, restart count 0 May 4 16:25:54.107: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.107: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:25:54.107: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.107: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:25:54.107: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:25:54.107: INFO: Container discover ready: false, restart count 0 May 4 16:25:54.107: INFO: Container init ready: false, restart count 0 May 4 16:25:54.107: INFO: Container install ready: false, restart count 0 May 4 16:25:54.107: INFO: busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 started at 2021-05-04 16:21:18 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.107: INFO: Container busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 ready: false, restart count 0 May 4 16:25:54.107: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.107: INFO: Container kube-multus ready: true, restart count 1 May 4 16:25:54.107: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:25:54.107: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:25:54.107: INFO: Container grafana ready: true, restart count 0 May 4 16:25:54.107: INFO: Container prometheus ready: true, restart count 1 May 4 16:25:54.107: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:25:54.107: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:25:54.107: INFO: pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c started at 2021-05-04 16:22:19 +0000 UTC (0+2 container statuses recorded) May 4 
16:25:54.107: INFO: Container busybox-main-container ready: false, restart count 0 May 4 16:25:54.107: INFO: Container busybox-sub-container ready: false, restart count 0 May 4 16:25:54.108: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.108: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:25:54.108: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:25:54.108: INFO: Container nodereport ready: true, restart count 0 May 4 16:25:54.108: INFO: Container reconcile ready: true, restart count 0 May 4 16:25:54.108: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:25:54.108: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:54.108: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:25:54.108: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:25:54.108: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:25:54.108: INFO: Container node-exporter ready: true, restart count 0 May 4 16:25:54.108: INFO: test-webserver-9b461c4f-7d52-4db1-9027-4951689fb2b4 started at 2021-05-04 16:22:21 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.108: INFO: Container test-webserver ready: true, restart count 0 May 4 16:25:54.108: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:25:54.108: INFO: Init container install-cni ready: true, restart count 2 May 4 16:25:54.108: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:25:54.108: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:25:54.108: INFO: Container nfd-worker ready: true, restart count 0 W0504 
16:25:54.121108 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:25:54.169: INFO: Latency metrics for node node1 May 4 16:25:54.169: INFO: Logging node info for node node2 May 4 16:25:54.172: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 42269 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":
{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:53 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:53 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:25:53 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:25:53 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:25:54.172: INFO: Logging kubelet events for node node2
May 4 16:25:54.174: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:25:54.187: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.187: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:25:54.187: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.187: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:25:54.187: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:25:54.187: INFO: Container nodereport ready: true, restart count 0
May 4 16:25:54.187: INFO: Container reconcile ready: true, restart count 0
May 4 16:25:54.187: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 started at 2021-05-04 16:22:49 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.187: INFO: Container main ready: false, restart count 0
May 4 16:25:54.187: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:25:54.187: INFO: Init container install-cni ready: true, restart count 2
May 4 16:25:54.187: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:25:54.187: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.187: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:25:54.187: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.187: INFO: Container webserver ready: false, restart count 0
May 4 16:25:54.187: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.187: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:25:54.187: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.187: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:25:54.187: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.187: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:25:54.187: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:25:54.187: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:25:54.187: INFO: Container node-exporter ready: true, restart count 0
May 4 16:25:54.187: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:25:54.187: INFO: Container tas-controller ready: true, restart count 0
May 4 16:25:54.187: INFO: Container tas-extender ready: true, restart count 0
May 4 16:25:54.187: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.188: INFO: Container c ready: false, restart count 0
May 4 16:25:54.188: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.188: INFO: Container kube-multus ready: true, restart count 1
May 4 16:25:54.188: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:25:54.188: INFO: Container discover ready: false, restart count 0
May 4 16:25:54.188: INFO: Container init ready: false, restart count 0
May 4 16:25:54.188: INFO: Container install ready: false, restart count 0
May 4 16:25:54.188: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:25:54.188: INFO: Container collectd ready: true, restart count 0
May 4 16:25:54.188: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:25:54.188: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:25:54.188: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.188: INFO: Container c ready: false, restart count 0
May 4 16:25:54.188: INFO: termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4 started at 2021-05-04 16:22:03 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.188: INFO: Container termination-message-container ready: false, restart count 0
May 4 16:25:54.188: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.188: INFO: Container c ready: false, restart count 0
May 4 16:25:54.188: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:25:54.188: INFO: Container nginx-proxy ready: true, restart count 2
W0504 16:25:54.199290 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:25:54.247: INFO: Latency metrics for node node2
May 4 16:25:54.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2302" for this suite.

• Failure [914.127 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should contain environment variables for services [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:25:53.905: Container should have service environment variables set
  Unexpected error:
      <*errors.errorString | 0xc0028b8490>: {
          s: "expected pod \"client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49\" success: Gave up after waiting 5m0s for pod \"client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49\" to be \"Succeeded or Failed\"",
      }
      expected pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" success: Gave up after waiting 5m0s for pod "client-envvars-3aca01f8-0d22-4951-b441-fa131ddecb49" to be "Succeeded or Failed"
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:527
------------------------------
{"msg":"FAILED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":613,"failed":1,"failures":["[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:21:10.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
May 4 16:26:10.479: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002821f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc003651d20, 0xc003793c00, 0x4be824b)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 +0xfe
k8s.io/kubernetes/test/e2e/apps.testRCAdoptMatchingOrphans(0xc00004f8c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:561 +0x233
k8s.io/kubernetes/test/e2e/apps.glob..func8.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:90 +0x2a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0015fcd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc0015fcd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc0015fcd80, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "replication-controller-4903".
STEP: Found 7 events.
May 4 16:26:10.484: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-adoption: { } Scheduled: Successfully assigned replication-controller-4903/pod-adoption to node1
May 4 16:26:10.484: INFO: At 2021-05-04 16:21:11 +0000 UTC - event for pod-adoption: {multus } AddedInterface: Add eth0 [10.244.4.183/24]
May 4 16:26:10.484: INFO: At 2021-05-04 16:21:11 +0000 UTC - event for pod-adoption: {kubelet node1} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
May 4 16:26:10.484: INFO: At 2021-05-04 16:21:12 +0000 UTC - event for pod-adoption: {kubelet node1} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:26:10.484: INFO: At 2021-05-04 16:21:12 +0000 UTC - event for pod-adoption: {kubelet node1} Failed: Error: ErrImagePull
May 4 16:26:10.484: INFO: At 2021-05-04 16:21:13 +0000 UTC - event for pod-adoption: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine"
May 4 16:26:10.484: INFO: At 2021-05-04 16:21:13 +0000 UTC - event for pod-adoption: {kubelet node1} Failed: Error: ImagePullBackOff
May 4 16:26:10.486: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:26:10.486: INFO: pod-adoption node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:21:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:21:10 +0000 UTC ContainersNotReady containers with unready status: [pod-adoption]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:21:10 +0000 UTC ContainersNotReady containers with unready status: [pod-adoption]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:21:10 +0000 UTC }]
May 4 16:26:10.486: INFO: 
May 4 16:26:10.491: INFO: Logging node info for node master1
May 4 16:26:10.493: INFO: Node Info: 
&Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 42368 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransit
ionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 
0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:00 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f 
k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:10.494: INFO: Logging kubelet events for node master1 May 4 16:26:10.496: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:26:10.505: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.505: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:26:10.505: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.505: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:26:10.505: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.505: INFO: Container docker-registry ready: true, restart count 0 May 4 16:26:10.505: INFO: Container nginx ready: true, restart count 0 May 4 16:26:10.505: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.505: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:10.505: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:10.505: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.505: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:26:10.505: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.505: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:26:10.505: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:26:10.505: INFO: Init container install-cni ready: true, 
restart count 0 May 4 16:26:10.505: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:26:10.505: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.505: INFO: Container kube-multus ready: true, restart count 1 May 4 16:26:10.505: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.505: INFO: Container coredns ready: true, restart count 1 May 4 16:26:10.505: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.505: INFO: Container nfd-controller ready: true, restart count 0 W0504 16:26:10.519226 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:26:10.551: INFO: Latency metrics for node master1 May 4 16:26:10.551: INFO: Logging node info for node master2 May 4 16:26:10.553: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 42412 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:10.554: INFO: Logging kubelet events for node master2 May 4 16:26:10.556: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:26:10.562: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.562: INFO: Container autoscaler ready: true, restart count 1 May 4 16:26:10.562: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.562: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:10.562: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:10.562: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.562: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:26:10.562: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.562: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:26:10.562: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.562: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:26:10.562: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.562: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:26:10.562: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container 
statuses recorded) May 4 16:26:10.562: INFO: Init container install-cni ready: true, restart count 0 May 4 16:26:10.562: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:26:10.562: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.562: INFO: Container kube-multus ready: true, restart count 1 W0504 16:26:10.573666 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:26:10.595: INFO: Latency metrics for node master2 May 4 16:26:10.596: INFO: Logging node info for node master3 May 4 16:26:10.598: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 42410 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:10.598: INFO: Logging kubelet events for node master3 May 4 16:26:10.600: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:26:10.608: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.608: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:26:10.608: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:26:10.608: INFO: Init container install-cni ready: true, restart count 0 May 4 16:26:10.608: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:26:10.608: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.608: INFO: Container kube-multus ready: true, restart count 1 May 4 16:26:10.608: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.608: INFO: Container coredns ready: true, restart count 1 May 4 16:26:10.608: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.608: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:10.608: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:10.608: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.608: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:26:10.608: INFO: kube-controller-manager-master3 
started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.608: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:26:10.608: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.608: INFO: Container kube-scheduler ready: true, restart count 2 W0504 16:26:10.621518 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:26:10.649: INFO: Latency metrics for node master3 May 4 16:26:10.649: INFO: Logging node info for node node1 May 4 16:26:10.651: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 42389 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:05 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:05 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:05 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:05 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:10.652: INFO: Logging kubelet events for node node1 May 4 16:26:10.654: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:26:10.670: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:26:10.670: INFO: Init container install-cni ready: true, restart count 2 May 4 16:26:10.670: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:26:10.670: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:26:10.670: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.670: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:10.670: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:26:10.670: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.670: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:10.670: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:10.670: INFO: test-webserver-9b461c4f-7d52-4db1-9027-4951689fb2b4 started at 2021-05-04 16:22:21 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container test-webserver ready: true, restart count 0 May 4 
16:26:10.670: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:26:10.670: INFO: Container collectd ready: true, restart count 0 May 4 16:26:10.670: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:26:10.670: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:26:10.670: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container c ready: false, restart count 0 May 4 16:26:10.670: INFO: pod-adoption started at 2021-05-04 16:21:10 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container pod-adoption ready: false, restart count 0 May 4 16:26:10.670: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:26:10.670: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:26:10.670: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:26:10.670: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container liveness-http ready: false, restart count 19 May 4 16:26:10.670: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:26:10.670: INFO: Container discover ready: false, restart count 0 May 4 16:26:10.670: INFO: Container init ready: false, restart count 0 May 4 16:26:10.670: INFO: Container install ready: false, restart count 0 May 4 16:26:10.670: INFO: busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 started at 
2021-05-04 16:21:18 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 ready: false, restart count 0 May 4 16:26:10.670: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container kube-multus ready: true, restart count 1 May 4 16:26:10.670: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.670: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:26:10.670: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.670: INFO: Container nodereport ready: true, restart count 0 May 4 16:26:10.670: INFO: Container reconcile ready: true, restart count 0 May 4 16:26:10.670: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:26:10.670: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:26:10.670: INFO: Container grafana ready: true, restart count 0 May 4 16:26:10.670: INFO: Container prometheus ready: true, restart count 1 May 4 16:26:10.670: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:26:10.670: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:26:10.670: INFO: pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c started at 2021-05-04 16:22:19 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.670: INFO: Container busybox-main-container ready: false, restart count 0 May 4 16:26:10.670: INFO: Container busybox-sub-container ready: false, restart count 0 W0504 16:26:10.683467 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:26:10.712: INFO: Latency metrics for node node1 May 4 16:26:10.712: INFO: Logging node info for node node2 May 4 16:26:10.716: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 42380 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:03 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:03 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:03 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:03 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:10.716: INFO: Logging kubelet events for node node2 May 4 16:26:10.718: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:26:10.731: INFO: termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4 started at 2021-05-04 16:22:03 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.731: INFO: Container termination-message-container ready: false, restart count 0 May 4 16:26:10.731: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:26:10.731: INFO: Container discover ready: false, restart count 0 May 4 16:26:10.731: INFO: Container init ready: false, restart count 0 May 4 16:26:10.731: INFO: Container install ready: false, restart count 0 May 4 16:26:10.731: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:26:10.731: INFO: Container collectd ready: true, restart count 0 May 4 16:26:10.731: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:26:10.731: INFO: Container rbac-proxy ready: 
true, restart count 0 May 4 16:26:10.731: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.731: INFO: Container c ready: false, restart count 0 May 4 16:26:10.731: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.731: INFO: Container c ready: false, restart count 0 May 4 16:26:10.731: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.731: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:26:10.731: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.731: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:26:10.732: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:26:10.732: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.732: INFO: Container nodereport ready: true, restart count 0 May 4 16:26:10.732: INFO: Container reconcile ready: true, restart count 0 May 4 16:26:10.732: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 started at 2021-05-04 16:22:49 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container main ready: false, restart count 0 May 4 16:26:10.732: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:26:10.732: INFO: Init container install-cni ready: true, restart count 2 May 4 16:26:10.732: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:26:10.732: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container cmk-webhook ready: 
true, restart count 0 May 4 16:26:10.732: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container webserver ready: false, restart count 0 May 4 16:26:10.732: INFO: var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720 started at 2021-05-04 16:25:54 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container dapi-container ready: false, restart count 0 May 4 16:26:10.732: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container c ready: false, restart count 0 May 4 16:26:10.732: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:26:10.732: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:26:10.732: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:26:10.732: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.732: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:10.732: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:10.732: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:26:10.732: INFO: Container tas-controller ready: true, restart count 0 May 4 16:26:10.732: INFO: Container tas-extender ready: true, restart count 0 May 4 16:26:10.732: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:10.732: INFO: Container kube-multus ready: true, 
restart count 1 W0504 16:26:10.743988 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:26:10.787: INFO: Latency metrics for node node2 May 4 16:26:10.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4903" for this suite. • Failure [300.364 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:26:10.479: Unexpected error: <*errors.errorString | 0xc0002821f0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 ------------------------------ {"msg":"FAILED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":29,"skipped":405,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:21:18.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:21:18.377: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01" in namespace "security-context-test-3814" to be "Succeeded or Failed" May 4 16:21:18.380: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.727443ms May 4 16:21:20.385: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007181172s May 4 16:21:22.387: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009752664s May 4 16:21:24.391: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013552972s May 4 16:21:26.394: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016818319s May 4 16:21:28.397: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020092023s May 4 16:21:30.401: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 12.023155704s May 4 16:21:32.404: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 14.026852368s May 4 16:21:34.408: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.030242929s [... the same poll record, Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false, repeats at ~2s intervals from 16:21:36 through 16:25:56 ...] May 4 16:25:58.953: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m40.575421193s May 4 16:26:00.955: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.578065482s May 4 16:26:02.959: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.581981199s May 4 16:26:04.964: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.586521729s May 4 16:26:06.968: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.59052051s May 4 16:26:08.974: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.596220174s May 4 16:26:10.977: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.59969035s May 4 16:26:12.980: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.602423074s May 4 16:26:14.983: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.605989276s May 4 16:26:16.988: INFO: Pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m58.610593942s
May 4 16:26:18.990: FAIL: wait for pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01" to succeed
Expected success, but got an error:
    <*errors.errorString | 0xc00582a7e0>: {
        s: "Gave up after waiting 5m0s for pod \"busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01\" to be \"Succeeded or Failed\"",
    }
    Gave up after waiting 5m0s for pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01" to be "Succeeded or Failed"

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).WaitForSuccess(0xc003d495e0, 0xc002588f00, 0x3b, 0x45d964b800)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:212 +0x2bb
k8s.io/kubernetes/test/e2e/common.glob..func29.4.2(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:198 +0x265
k8s.io/kubernetes/test/e2e/common.glob..func29.4.4()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:223 +0x2a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703c80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc000703c80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc000703c80, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "security-context-test-3814".
STEP: Found 10 events. 
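The FAIL above is the framework's `WaitForSuccess` giving up: it repeatedly polls the pod's phase on a short fixed interval until the pod reports "Succeeded" or "Failed", and raises once the 5m0s budget is exhausted; the pod here never left "Pending" because the busybox image pull was rate-limited (see the events below). A minimal sketch of that wait loop, assuming a hypothetical `get_phase` callable and an injectable clock (this is not the actual e2e framework code, which lives in `test/e2e/framework/pods.go`):

```python
import time


def wait_for_success(get_phase, timeout=300.0, poll=2.0,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed'.

    Mirrors the observed behavior: a pod stuck in 'Pending'
    (e.g. due to ImagePullBackOff) exhausts the timeout and raises.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()           # one poll entry in the log above
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll)                   # ~2s between poll entries
    raise TimeoutError(
        f'Gave up after waiting {timeout:.0f}s for pod to be "Succeeded or Failed"'
    )
```

With a fake clock and `get_phase` always returning "Pending", this loop makes roughly `timeout / poll` polls (about 150 here, matching the two-second cadence of the log) before raising.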
May 4 16:26:18.996: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: { } Scheduled: Successfully assigned security-context-test-3814/busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 to node1
May 4 16:26:18.996: INFO: At 2021-05-04 16:21:19 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: {multus } AddedInterface: Add eth0 [10.244.4.185/24]
May 4 16:26:18.996: INFO: At 2021-05-04 16:21:19 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:26:18.996: INFO: At 2021-05-04 16:21:21 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:26:18.996: INFO: At 2021-05-04 16:21:21 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: {kubelet node1} Failed: Error: ErrImagePull
May 4 16:26:18.996: INFO: At 2021-05-04 16:21:22 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
May 4 16:26:18.996: INFO: At 2021-05-04 16:21:25 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: {multus } AddedInterface: Add eth0 [10.244.4.188/24]
May 4 16:26:18.996: INFO: At 2021-05-04 16:21:25 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:26:18.996: INFO: At 2021-05-04 16:21:25 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: {kubelet node1} Failed: Error: ImagePullBackOff
May 4 16:26:18.996: INFO: At 2021-05-04 16:21:30 +0000 UTC - event for busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01: {multus } AddedInterface: Add eth0 [10.244.4.191/24]
May 4 16:26:18.999: INFO: POD                                                          NODE   PHASE    GRACE  CONDITIONS
May 4 16:26:18.999: INFO: busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01  node1  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:21:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:21:18 +0000 UTC ContainersNotReady containers with unready status: [busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:21:18 +0000 UTC ContainersNotReady containers with unready status: [busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:21:18 +0000 UTC }]
May 4 16:26:18.999: INFO:
May 4 16:26:19.003: INFO: Logging node info for node master1
May 4 16:26:19.007: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 42419 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} 
flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm 
Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 
UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:10 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:19.008: INFO: Logging kubelet events for node master1 May 4 16:26:19.010: INFO: Logging pods the 
kubelet thinks is on node master1 May 4 16:26:19.021: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.021: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:26:19.021: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.021: INFO: Container docker-registry ready: true, restart count 0 May 4 16:26:19.021: INFO: Container nginx ready: true, restart count 0 May 4 16:26:19.021: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.021: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:19.021: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:19.021: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.021: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:26:19.021: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.021: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:26:19.021: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.021: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:26:19.021: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.021: INFO: Container kube-multus ready: true, restart count 1 May 4 16:26:19.021: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.021: INFO: Container coredns ready: true, restart count 1 May 4 16:26:19.021: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) 
May 4 16:26:19.021: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:26:19.021: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:26:19.021: INFO: Init container install-cni ready: true, restart count 0 May 4 16:26:19.021: INFO: Container kube-flannel ready: true, restart count 3 W0504 16:26:19.037045 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:26:19.064: INFO: Latency metrics for node master1 May 4 16:26:19.064: INFO: Logging node info for node master2 May 4 16:26:19.066: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 42412 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:19.067: INFO: Logging kubelet events for node master2 May 4 16:26:19.069: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:26:19.076: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.076: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:19.076: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:19.076: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.076: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:26:19.076: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.076: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:26:19.076: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.077: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:26:19.077: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.077: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:26:19.077: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:26:19.077: INFO: Init container install-cni ready: true, restart count 0 May 4 16:26:19.077: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:26:19.077: INFO: 
kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.077: INFO: Container kube-multus ready: true, restart count 1 May 4 16:26:19.077: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.077: INFO: Container autoscaler ready: true, restart count 1 W0504 16:26:19.091490 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:26:19.123: INFO: Latency metrics for node master2 May 4 16:26:19.123: INFO: Logging node info for node master3 May 4 16:26:19.126: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 42410 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:09 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:19.126: INFO: Logging kubelet events for node master3 May 4 16:26:19.128: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:26:19.138: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:26:19.138: INFO: Init container install-cni ready: true, restart count 0 May 4 16:26:19.138: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:26:19.138: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.138: INFO: Container kube-multus ready: true, restart count 1 May 4 16:26:19.138: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.138: INFO: Container coredns ready: true, restart count 1 May 4 16:26:19.138: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.138: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:19.138: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:19.138: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.138: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:26:19.138: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.138: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:26:19.138: INFO: 
kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.138: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:26:19.138: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.138: INFO: Container kube-proxy ready: true, restart count 2 W0504 16:26:19.153515 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:26:19.178: INFO: Latency metrics for node master3 May 4 16:26:19.178: INFO: Logging node info for node node1 May 4 16:26:19.181: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 42447 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:15 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:15 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:15 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:15 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:19.182: INFO: Logging kubelet events for node node1 May 4 16:26:19.184: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:26:19.199: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.199: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:19.199: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:19.199: INFO: test-webserver-9b461c4f-7d52-4db1-9027-4951689fb2b4 started at 2021-05-04 16:22:21 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container test-webserver ready: true, restart count 0 May 4 16:26:19.199: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:26:19.199: INFO: Init container install-cni ready: true, restart count 2 May 4 16:26:19.199: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:26:19.199: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:26:19.199: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.199: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:19.199: INFO: Container prometheus-operator ready: true, restart count 0 May 4 
16:26:19.199: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:26:19.199: INFO: Container collectd ready: true, restart count 0 May 4 16:26:19.199: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:26:19.199: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:26:19.199: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container c ready: false, restart count 0 May 4 16:26:19.199: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:26:19.199: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:26:19.199: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:26:19.199: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container liveness-http ready: false, restart count 19 May 4 16:26:19.199: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:26:19.199: INFO: Container discover ready: false, restart count 0 May 4 16:26:19.199: INFO: Container init ready: false, restart count 0 May 4 16:26:19.199: INFO: Container install ready: false, restart count 0 May 4 16:26:19.199: INFO: busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 started at 2021-05-04 16:21:18 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01 ready: false, restart count 0 May 4 
16:26:19.199: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container kube-multus ready: true, restart count 1 May 4 16:26:19.199: INFO: pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c started at 2021-05-04 16:22:19 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.199: INFO: Container busybox-main-container ready: false, restart count 0 May 4 16:26:19.199: INFO: Container busybox-sub-container ready: false, restart count 0 May 4 16:26:19.199: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.199: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:26:19.199: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.199: INFO: Container nodereport ready: true, restart count 0 May 4 16:26:19.199: INFO: Container reconcile ready: true, restart count 0 May 4 16:26:19.199: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:26:19.199: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:26:19.199: INFO: Container grafana ready: true, restart count 0 May 4 16:26:19.199: INFO: Container prometheus ready: true, restart count 1 May 4 16:26:19.199: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:26:19.199: INFO: Container rules-configmap-reloader ready: true, restart count 0 W0504 16:26:19.212939 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:26:19.246: INFO: Latency metrics for node node1 May 4 16:26:19.246: INFO: Logging node info for node node2 May 4 16:26:19.248: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 42436 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:13 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:13 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:13 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:13 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:26:19.249: INFO: Logging kubelet events for node node2 May 4 16:26:19.251: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:26:19.265: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:26:19.265: INFO: Init container install-cni ready: true, restart count 2 May 4 16:26:19.265: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:26:19.265: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:26:19.265: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container webserver ready: false, restart count 0 May 4 16:26:19.265: INFO: var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720 started at 2021-05-04 16:25:54 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container dapi-container ready: false, restart count 0 May 4 16:26:19.265: INFO: kube-proxy-rfjjf 
started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:26:19.265: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:26:19.265: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.265: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:26:19.265: INFO: Container node-exporter ready: true, restart count 0 May 4 16:26:19.265: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.265: INFO: Container tas-controller ready: true, restart count 0 May 4 16:26:19.265: INFO: Container tas-extender ready: true, restart count 0 May 4 16:26:19.265: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container c ready: false, restart count 0 May 4 16:26:19.265: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:26:19.265: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container kube-multus ready: true, restart count 1 May 4 16:26:19.265: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:26:19.265: INFO: Container discover ready: false, restart count 0 May 4 16:26:19.265: INFO: Container init ready: false, restart count 0 May 4 16:26:19.265: INFO: Container install ready: false, restart count 0 May 4 16:26:19.265: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) 
May 4 16:26:19.265: INFO: Container collectd ready: true, restart count 0 May 4 16:26:19.265: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:26:19.265: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:26:19.265: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container c ready: false, restart count 0 May 4 16:26:19.265: INFO: termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4 started at 2021-05-04 16:22:03 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container termination-message-container ready: false, restart count 0 May 4 16:26:19.265: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container c ready: false, restart count 0 May 4 16:26:19.265: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:26:19.265: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:26:19.265: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:26:19.265: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:26:19.265: INFO: Container nodereport ready: true, restart count 0 May 4 16:26:19.265: INFO: Container reconcile ready: true, restart count 0 May 4 16:26:19.265: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 started at 2021-05-04 16:22:49 +0000 UTC (0+1 container statuses recorded) May 4 16:26:19.265: INFO: Container main ready: false, restart 
count 0 W0504 16:26:19.282233 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:26:19.325: INFO: Latency metrics for node node2 May 4 16:26:19.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3814" for this suite. • Failure [300.987 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:26:18.990: wait for pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01" to succeed Expected success, but got an error: <*errors.errorString | 0xc00582a7e0>: { s: "Gave up after waiting 5m0s for pod \"busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01\" to be \"Succeeded or Failed\"", } Gave up after waiting 5m0s for pod "busybox-readonly-false-12e191e5-ff02-4f5c-8645-7b0e6b4f0c01" to be "Succeeded or Failed" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:212 ------------------------------ {"msg":"FAILED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":734,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","[k8s.io] Security Context When creating a 
pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]"]} [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:26:19.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 4 16:26:19.381: INFO: Waiting up to 5m0s for pod "client-containers-30d4ceb5-d685-4509-b0c2-e50dc9471c91" in namespace "containers-6931" to be "Succeeded or Failed" May 4 16:26:19.385: INFO: Pod "client-containers-30d4ceb5-d685-4509-b0c2-e50dc9471c91": Phase="Pending", Reason="", readiness=false. Elapsed: 3.949353ms May 4 16:26:21.389: INFO: Pod "client-containers-30d4ceb5-d685-4509-b0c2-e50dc9471c91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007552895s May 4 16:26:23.394: INFO: Pod "client-containers-30d4ceb5-d685-4509-b0c2-e50dc9471c91": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013005093s STEP: Saw pod success May 4 16:26:23.394: INFO: Pod "client-containers-30d4ceb5-d685-4509-b0c2-e50dc9471c91" satisfied condition "Succeeded or Failed" May 4 16:26:23.397: INFO: Trying to get logs from node node1 pod client-containers-30d4ceb5-d685-4509-b0c2-e50dc9471c91 container test-container: STEP: delete the pod May 4 16:26:23.410: INFO: Waiting for pod client-containers-30d4ceb5-d685-4509-b0c2-e50dc9471c91 to disappear May 4 16:26:23.412: INFO: Pod client-containers-30d4ceb5-d685-4509-b0c2-e50dc9471c91 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:26:23.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6931" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":734,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:22:21.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-9b461c4f-7d52-4db1-9027-4951689fb2b4 in namespace container-probe-5555 May 4 16:22:25.248: INFO: Started pod test-webserver-9b461c4f-7d52-4db1-9027-4951689fb2b4 in namespace container-probe-5555 STEP: checking the pod's current state and verifying that restartCount is present May 4 16:22:25.251: INFO: Initial restart count of pod test-webserver-9b461c4f-7d52-4db1-9027-4951689fb2b4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:26:25.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5555" for this suite. 
• [SLOW TEST:244.541 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":604,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:26:10.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:26:27.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1466" for this suite.

• [SLOW TEST:17.065 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":30,"skipped":419,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:26:23.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
May 4 16:26:23.502: INFO: Waiting up to 5m0s for pod "pod-ab04217a-56d5-4da7-bcd9-c9a2f301685e" in namespace "emptydir-8826" to be "Succeeded or Failed"
May 4 16:26:23.506: INFO: Pod "pod-ab04217a-56d5-4da7-bcd9-c9a2f301685e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.752126ms
May 4 16:26:25.508: INFO: Pod "pod-ab04217a-56d5-4da7-bcd9-c9a2f301685e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006652567s
May 4 16:26:27.512: INFO: Pod "pod-ab04217a-56d5-4da7-bcd9-c9a2f301685e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010263471s
May 4 16:26:29.515: INFO: Pod "pod-ab04217a-56d5-4da7-bcd9-c9a2f301685e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013600526s
STEP: Saw pod success
May 4 16:26:29.515: INFO: Pod "pod-ab04217a-56d5-4da7-bcd9-c9a2f301685e" satisfied condition "Succeeded or Failed"
May 4 16:26:29.518: INFO: Trying to get logs from node node1 pod pod-ab04217a-56d5-4da7-bcd9-c9a2f301685e container test-container: 
STEP: delete the pod
May 4 16:26:29.531: INFO: Waiting for pod pod-ab04217a-56d5-4da7-bcd9-c9a2f301685e to disappear
May 4 16:26:29.533: INFO: Pod pod-ab04217a-56d5-4da7-bcd9-c9a2f301685e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:26:29.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8826" for this suite.

• [SLOW TEST:6.084 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":753,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:26:25.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:26:25.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8731 create -f -'
May 4 16:26:26.108: INFO: stderr: ""
May 4 16:26:26.108: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
May 4 16:26:26.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8731 create -f -'
May 4 16:26:26.352: INFO: stderr: ""
May 4 16:26:26.352: INFO: stdout: "service/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
May 4 16:26:27.356: INFO: Selector matched 1 pods for map[app:agnhost]
May 4 16:26:27.356: INFO: Found 0 / 1
May 4 16:26:28.357: INFO: Selector matched 1 pods for map[app:agnhost]
May 4 16:26:28.357: INFO: Found 0 / 1
May 4 16:26:29.357: INFO: Selector matched 1 pods for map[app:agnhost]
May 4 16:26:29.357: INFO: Found 1 / 1
May 4 16:26:29.357: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 4 16:26:29.359: INFO: Selector matched 1 pods for map[app:agnhost]
May 4 16:26:29.359: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 4 16:26:29.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8731 describe pod agnhost-primary-dcfll'
May 4 16:26:29.559: INFO: stderr: ""
May 4 16:26:29.559: INFO: stdout: "Name: agnhost-primary-dcfll\nNamespace: kubectl-8731\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Tue, 04 May 2021 16:26:26 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.2\"\n ],\n \"mac\": \"9a:1a:22:bd:df:01\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.2\"\n ],\n \"mac\": \"9a:1a:22:bd:df:01\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.2\nIPs:\n IP: 10.244.3.2\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://be9cb18f0e6ce353dcf8c88438f558106be5d30ec26aaa5107e5dbbbadd97f07\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 04 May 2021 16:26:28 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-b8jb7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-b8jb7:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-b8jb7\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-8731/agnhost-primary-dcfll to node2\n Normal AddedInterface 2s multus Add eth0 [10.244.3.2/24]\n Normal Pulling 2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\"\n Normal Pulled 1s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" in 498.751778ms\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n"
May 4 16:26:29.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8731 describe rc agnhost-primary'
May 4 16:26:29.756: INFO: stderr: ""
May 4 16:26:29.756: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8731\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-dcfll\n"
May 4 16:26:29.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8731 describe service agnhost-primary'
May 4 16:26:29.914: INFO: stderr: ""
May 4 16:26:29.914: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8731\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.233.41.182\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.3.2:6379\nSession Affinity: None\nEvents: \n"
May 4 16:26:29.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8731 describe node master1'
May 4 16:26:30.098: INFO: stderr: ""
May 4 16:26:30.098: INFO: stdout: "Name: master1\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"3e:f0:43:cb:66:52\"}\n flannel.alpha.coreos.com/backend-type: vxlan\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n nfd.node.kubernetes.io/master.version: v0.7.0\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 04 May 2021 14:43:01 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Tue, 04 May 2021 16:26:20 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Tue, 04 May 2021 14:47:46 +0000 Tue, 04 May 2021 14:47:46 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Tue, 04 May 2021 16:26:20 +0000 Tue, 04 May 2021 14:43:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 04 May 2021 16:26:20 +0000 Tue, 04 May 2021 14:43:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 04 May 2021 16:26:20 +0000 Tue, 04 May 2021 14:43:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 04 May 2021 16:26:20 +0000 Tue, 04 May 2021 14:47:24 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 439913340Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518328Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 405424133473\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629496Ki\n pods: 110\nSystem Info:\n Machine ID: 88a0771919594d4187f6704fc7592bf8\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: 8e0a253b-2aa4-4467-879e-567e7ba1ffa4\n Kernel Version: 3.10.0-1160.25.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://19.3.14\n Kubelet Version: v1.19.8\n Kube-Proxy Version: v1.19.8\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (10 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-7677f9bb54-qvcd2 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 100m\n kube-system docker-registry-docker-registry-56cbc7bc58-zhf8t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 97m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 94m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 102m\n kube-system kube-flannel-qspzk 150m (0%) 300m (0%) 64M (0%) 500M (0%) 100m\n kube-system kube-multus-ds-amd64-jflvf 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 100m\n kube-system kube-proxy-8j6ch 0 (0%) 0 (0%) 0 (0%) 0 (0%) 101m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 87m\n kube-system node-feature-discovery-controller-5bf5c49849-72rn6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 94m\n monitoring node-exporter-jckjs 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 90m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 670m (0%)\n memory 431140Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n"
May 4 16:26:30.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8731 describe namespace kubectl-8731'
May 4 16:26:30.247: INFO: stderr: ""
May 4 16:26:30.247: INFO: stdout: "Name: kubectl-8731\nLabels: e2e-framework=kubectl\n e2e-run=4fd0464a-d9ae-4cab-b7a0-c34d58a74eea\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:26:30.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8731" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":34,"skipped":631,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:26:27.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-a67c6dd5-5e88-4fc6-b8d1-ffa276b2f62c
STEP: Creating a pod to test consume secrets
May 4 16:26:27.937: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-17446e24-a3bb-46de-9a46-95ddf5615e57" in namespace "projected-3329" to be "Succeeded or Failed"
May 4 16:26:27.942: INFO: Pod "pod-projected-secrets-17446e24-a3bb-46de-9a46-95ddf5615e57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.502334ms
May 4 16:26:29.945: INFO: Pod "pod-projected-secrets-17446e24-a3bb-46de-9a46-95ddf5615e57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007826321s
May 4 16:26:31.950: INFO: Pod "pod-projected-secrets-17446e24-a3bb-46de-9a46-95ddf5615e57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012298907s
STEP: Saw pod success
May 4 16:26:31.950: INFO: Pod "pod-projected-secrets-17446e24-a3bb-46de-9a46-95ddf5615e57" satisfied condition "Succeeded or Failed"
May 4 16:26:31.952: INFO: Trying to get logs from node node2 pod pod-projected-secrets-17446e24-a3bb-46de-9a46-95ddf5615e57 container secret-volume-test: 
STEP: delete the pod
May 4 16:26:31.966: INFO: Waiting for pod pod-projected-secrets-17446e24-a3bb-46de-9a46-95ddf5615e57 to disappear
May 4 16:26:31.969: INFO: Pod pod-projected-secrets-17446e24-a3bb-46de-9a46-95ddf5615e57 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:26:31.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3329" for this suite.
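For reference, the "consumable in multiple volumes" case above mounts a single Secret into one pod through two separate volumes. A minimal manifest exercising the same path might look like the following sketch (the Secret name is taken from the log; the pod name, volume names, mount paths, and container args are illustrative, not from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    volumeMounts:                            # same Secret, two mounts
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-a67c6dd5-5e88-4fc6-b8d1-ffa276b2f62c
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-a67c6dd5-5e88-4fc6-b8d1-ffa276b2f62c
```

The test then waits for the pod to reach "Succeeded or Failed", as the polling lines above show, and verifies the mounted content via the container logs.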
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":423,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:26:29.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 4 16:26:30.229: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 4 16:26:32.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742390, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742390, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742390, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742390, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 4 16:26:34.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742390, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742390, loc:(*time.Location)(0x770c940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742390, loc:(*time.Location)(0x770c940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755742390, loc:(*time.Location)(0x770c940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 4 16:26:37.247: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:26:37.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2323" for this suite.
STEP: Destroying namespace "webhook-2323-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.797 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":41,"skipped":757,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:26:32.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should serve a basic endpoint from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service endpoint-test2 in namespace services-2975
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2975 to expose endpoints map[]
May 4 16:26:32.044: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
May 4 16:26:33.050: INFO: successfully validated that service endpoint-test2 in namespace services-2975 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-2975
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2975 to expose endpoints map[pod1:[80]]
May 4 16:26:37.073: INFO: successfully validated that service endpoint-test2 in namespace services-2975 exposes endpoints map[pod1:[80]]
STEP: Creating pod pod2 in namespace services-2975
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2975 to expose endpoints map[pod1:[80] pod2:[80]]
May 4 16:26:40.095: INFO: successfully validated that service endpoint-test2 in namespace services-2975 exposes endpoints map[pod1:[80] pod2:[80]]
STEP: Deleting pod pod1 in namespace services-2975
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2975 to expose endpoints map[pod2:[80]]
May 4 16:26:40.111: INFO: successfully validated that service endpoint-test2 in namespace services-2975 exposes endpoints map[pod2:[80]]
STEP: Deleting pod pod2 in namespace services-2975
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2975 to expose endpoints map[]
May 4 16:26:40.122: INFO: successfully validated that service endpoint-test2 in namespace services-2975 exposes endpoints map[]
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:26:40.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2975" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786

• [SLOW TEST:8.128 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":32,"skipped":438,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:26:37.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 4 16:26:37.391: INFO: Waiting up to 5m0s for pod "pod-8a3e0f12-f4f7-43ea-bdb1-fa7c6d27a86c" in namespace "emptydir-2567" to be "Succeeded or Failed"
May 4 16:26:37.393: INFO: Pod "pod-8a3e0f12-f4f7-43ea-bdb1-fa7c6d27a86c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682322ms
May 4 16:26:39.397: INFO: Pod "pod-8a3e0f12-f4f7-43ea-bdb1-fa7c6d27a86c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005892804s
May 4 16:26:41.400: INFO: Pod "pod-8a3e0f12-f4f7-43ea-bdb1-fa7c6d27a86c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009154809s
STEP: Saw pod success
May 4 16:26:41.400: INFO: Pod "pod-8a3e0f12-f4f7-43ea-bdb1-fa7c6d27a86c" satisfied condition "Succeeded or Failed"
May 4 16:26:41.402: INFO: Trying to get logs from node node1 pod pod-8a3e0f12-f4f7-43ea-bdb1-fa7c6d27a86c container test-container: 
STEP: delete the pod
May 4 16:26:41.421: INFO: Waiting for pod pod-8a3e0f12-f4f7-43ea-bdb1-fa7c6d27a86c to disappear
May 4 16:26:41.423: INFO: Pod pod-8a3e0f12-f4f7-43ea-bdb1-fa7c6d27a86c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:26:41.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2567" for this suite.
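The "(non-root,0644,tmpfs)" case above writes a mode-0644 file into a memory-backed emptyDir as a non-root user and checks the resulting filesystem type and permissions. A minimal manifest along the same lines might look like this sketch (the pod name, UID, mount path, and container args are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example       # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                      # non-root, per the test title
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                     # tmpfs-backed emptyDir
```

With `medium: Memory` the kubelet mounts a tmpfs for the volume, which is what the "on tmpfs" variant of the test distinguishes from the "default medium" variant that passed earlier in this log.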
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":762,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:26:41.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 4 16:26:41.512: INFO: Pod name pod-release: Found 0 pods out of 1 May 4 16:26:46.518: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:26:47.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-956" for this suite. 
• [SLOW TEST:6.063 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":43,"skipped":795,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]"]} SSSS ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":464,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]"]} [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:22:03.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when 
pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
May 4 16:27:03.163: FAIL: Timed out after 300.000s.
Expected
    : Pending
to equal
    : Succeeded
Full Stack Trace
k8s.io/kubernetes/test/e2e/common.glob..func25.1.2.1(0x4c5d8a0, 0x1d, 0xc0004ec0c0, 0x1e, 0xc0046deae0, 0x2, 0x2, 0xc004706570, 0x1, 0x1, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:154 +0x3f1
k8s.io/kubernetes/test/e2e/common.glob..func25.1.2.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:262 +0x1d2
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002947080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002947080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002947080, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "container-runtime-7490".
STEP: Found 9 events.
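The 300-second timeout above (the pod never leaves Pending) is explained by the events the framework collects next: every busybox pull hits Docker Hub's anonymous `toomanyrequests` pull rate limit, so this is an infrastructure failure, not a regression in the termination-message behavior under test. A minimal sketch of one common mitigation, assuming authenticated Docker Hub pulls are acceptable for this cluster (the secret and pod names here are illustrative, not from this run):

```yaml
# Assumes a pull secret created beforehand with placeholders, e.g.:
#   kubectl -n container-runtime-7490 create secret docker-registry dockerhub-creds \
#     --docker-username=<user> --docker-password=<access-token>
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # illustrative name
  namespace: container-runtime-7490
spec:
  imagePullSecrets:
    - name: dockerhub-creds           # authenticated pulls get a higher rate limit
  containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "exit 0"]
  restartPolicy: Never
```

Alternatively, mirroring busybox into the cluster-local registry visible in the node image lists below (`localhost:30500`) and pointing the tests at it would avoid Docker Hub entirely.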
May 4 16:27:03.175: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4: { } Scheduled: Successfully assigned container-runtime-7490/termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4 to node2
May 4 16:27:03.175: INFO: At 2021-05-04 16:22:04 +0000 UTC - event for termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4: {multus } AddedInterface: Add eth0 [10.244.3.247/24]
May 4 16:27:03.175: INFO: At 2021-05-04 16:22:04 +0000 UTC - event for termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:27:03.175: INFO: At 2021-05-04 16:22:05 +0000 UTC - event for termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:27:03.175: INFO: At 2021-05-04 16:22:05 +0000 UTC - event for termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:27:03.175: INFO: At 2021-05-04 16:22:06 +0000 UTC - event for termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:27:03.175: INFO: At 2021-05-04 16:22:08 +0000 UTC - event for termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4: {multus } AddedInterface: Add eth0 [10.244.3.248/24]
May 4 16:27:03.175: INFO: At 2021-05-04 16:22:08 +0000 UTC - event for termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:27:03.175: INFO: At 2021-05-04 16:22:08 +0000 UTC - event for termination-message-containere34b0019-bcb9-4fa0-9c1b-5eb0017d80c4: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:27:03.177: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:27:03.177: INFO:
May 4 16:27:03.181: INFO: Logging node info for node master1
May 4 16:27:03.184: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 43150 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:03.184: INFO: Logging kubelet events for node master1 May 4 16:27:03.186: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:27:03.198: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.198: INFO: 
Container kube-controller-manager ready: true, restart count 2 May 4 16:27:03.198: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.198: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:27:03.198: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:27:03.198: INFO: Container docker-registry ready: true, restart count 0 May 4 16:27:03.198: INFO: Container nginx ready: true, restart count 0 May 4 16:27:03.198: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:03.198: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:03.198: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:03.198: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.198: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:27:03.198: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.198: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:03.198: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:03.198: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:03.198: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:27:03.198: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.198: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:03.198: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.198: INFO: Container coredns ready: true, restart count 1 May 4 16:27:03.198: INFO: 
node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.198: INFO: Container nfd-controller ready: true, restart count 0 W0504 16:27:03.211255 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:03.234: INFO: Latency metrics for node master1 May 4 16:27:03.234: INFO: Logging node info for node master2 May 4 16:27:03.236: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 43147 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:03.237: INFO: Logging kubelet events for node master2 May 4 16:27:03.239: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:27:03.246: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.246: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:03.246: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.246: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:27:03.246: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.246: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:27:03.246: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:03.246: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:03.246: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:27:03.246: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.246: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:03.246: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.246: INFO: Container autoscaler ready: true, restart count 1 May 4 16:27:03.246: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container 
statuses recorded) May 4 16:27:03.246: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:03.246: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:03.246: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.246: INFO: Container kube-apiserver ready: true, restart count 0 W0504 16:27:03.258604 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:03.282: INFO: Latency metrics for node master2 May 4 16:27:03.282: INFO: Logging node info for node master3 May 4 16:27:03.285: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 43146 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:00 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:00 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:27:03.286: INFO: Logging kubelet events for node master3
May 4 16:27:03.288: INFO: Logging pods the kubelet thinks is on node master3
May 4 16:27:03.296: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:27:03.296: INFO: Init container install-cni ready: true, restart count 0
May 4 16:27:03.296: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:27:03.296: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.296: INFO: Container kube-multus ready: true, restart count 1
May 4 16:27:03.296: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.296: INFO: Container coredns ready: true, restart count 1
May 4 16:27:03.296: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:03.296: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:27:03.296: INFO: Container node-exporter ready: true, restart count 0
May 4 16:27:03.296: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.296: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:27:03.296: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.296: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:27:03.296: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.296: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:27:03.296: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.296: INFO: Container kube-proxy ready: true, restart count 2
W0504 16:27:03.310317 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:27:03.334: INFO: Latency metrics for node master3
May 4 16:27:03.334: INFO: Logging node info for node node1
May 4 16:27:03.337: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 43131 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:55 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:55 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:55 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:55 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:27:03.338: INFO: Logging kubelet events for node node1
May 4 16:27:03.340: INFO: Logging pods the kubelet thinks is on node node1
May 4 16:27:03.353: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.353: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 16:27:03.353: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:03.353: INFO: Container nodereport ready: true, restart count 0
May 4 16:27:03.353: INFO: Container reconcile ready: true, restart count 0
May 4 16:27:03.353: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded)
May 4 16:27:03.353: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 4 16:27:03.353: INFO: Container grafana ready: true, restart count 0
May 4 16:27:03.353: INFO: Container prometheus ready: true, restart count 1
May 4 16:27:03.353: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 4 16:27:03.353: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 4 16:27:03.353: INFO: pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c started at 2021-05-04 16:22:19 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:03.353: INFO: Container busybox-main-container ready: false, restart count 0
May 4 16:27:03.353: INFO: Container busybox-sub-container ready: false, restart count 0
May 4 16:27:03.353: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:27:03.353: INFO: Init container install-cni ready: true, restart count 2
May 4 16:27:03.353: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:27:03.353: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.353: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:27:03.353: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:03.353: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:27:03.353: INFO: Container prometheus-operator ready: true, restart count 0
May 4 16:27:03.353: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:03.353: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:27:03.353: INFO: Container node-exporter ready: true, restart count 0
May 4 16:27:03.353: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:03.353: INFO: Container collectd ready: true, restart count 0
May 4 16:27:03.353: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:27:03.353: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:27:03.353: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.353: INFO: Container c ready: false, restart count 0
May 4 16:27:03.353: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.353: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:27:03.353: INFO: busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 started at 2021-05-04 16:26:47 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.353: INFO: Container busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 ready: false, restart count 0
May 4 16:27:03.353: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.353: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:27:03.353: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.353: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:27:03.353: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.353: INFO: Container liveness-http ready: false, restart count 19
May 4 16:27:03.353: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:03.353: INFO: Container discover ready: false, restart count 0
May 4 16:27:03.353: INFO: Container init ready: false, restart count 0
May 4 16:27:03.353: INFO: Container install ready: false, restart count 0
May 4 16:27:03.353: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:03.353: INFO: Container kube-multus ready: true, restart count 1
W0504 16:27:03.366396 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:27:03.397: INFO: Latency metrics for node node1
May 4 16:27:03.397: INFO: Logging node info for node node2
May 4 16:27:03.400: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 43124 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:53 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:53 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:26:53 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:26:53 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:03.401: INFO: Logging kubelet events for node node2 May 4 16:27:03.403: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:27:03.416: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:03.416: INFO: Init container install-cni ready: true, restart count 2 May 4 16:27:03.416: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:27:03.416: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:27:03.416: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container webserver ready: false, restart count 0 May 4 16:27:03.416: INFO: var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720 started at 2021-05-04 16:25:54 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container dapi-container ready: false, restart count 0 May 4 16:27:03.416: INFO: foo-9dkvq started 
at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container c ready: false, restart count 0 May 4 16:27:03.416: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:27:03.416: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:27:03.416: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:27:03.416: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:03.416: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:03.416: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:03.416: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:27:03.416: INFO: Container tas-controller ready: true, restart count 0 May 4 16:27:03.416: INFO: Container tas-extender ready: true, restart count 0 May 4 16:27:03.416: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:03.416: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:27:03.416: INFO: Container discover ready: false, restart count 0 May 4 16:27:03.416: INFO: Container init ready: false, restart count 0 May 4 16:27:03.416: INFO: Container install ready: false, restart count 0 May 4 16:27:03.416: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) 
May 4 16:27:03.416: INFO: Container collectd ready: true, restart count 0 May 4 16:27:03.416: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:27:03.416: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:27:03.416: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container c ready: false, restart count 0 May 4 16:27:03.416: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container c ready: false, restart count 0 May 4 16:27:03.416: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:27:03.416: INFO: pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84 started at 2021-05-04 16:26:40 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container test ready: false, restart count 0 May 4 16:27:03.416: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:27:03.416: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:27:03.416: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:27:03.416: INFO: Container nodereport ready: true, restart count 0 May 4 16:27:03.416: INFO: Container reconcile ready: true, restart count 0 May 4 16:27:03.416: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 started at 2021-05-04 16:22:49 +0000 UTC (0+1 container statuses recorded) May 4 16:27:03.416: INFO: Container main ready: false, restart count 0 W0504 16:27:03.428317 27 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:27:03.499: INFO: Latency metrics for node node2
May 4 16:27:03.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7490" for this suite.

• Failure [300.379 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

      May 4 16:27:03.163: Timed out after 300.000s.
      Expected
          : Pending
      to equal
          : Succeeded

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:154
------------------------------
{"msg":"FAILED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":464,"failed":4,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:22:19.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating Pod
May 4 16:27:19.445: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002c2200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc0048d37c0, 0xc002d64400, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 +0xfe
k8s.io/kubernetes/test/e2e/common.glob..func8.16()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:283 +0xa58
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002d4d800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002d4d800, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "emptydir-424".
STEP: Found 9 events.
May 4 16:27:19.450: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c: { } Scheduled: Successfully assigned emptydir-424/pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c to node1
May 4 16:27:19.450: INFO: At 2021-05-04 16:22:20 +0000 UTC - event for pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c: {multus } AddedInterface: Add eth0 [10.244.4.200/24]
May 4 16:27:19.450: INFO: At 2021-05-04 16:22:20 +0000 UTC - event for pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:27:19.450: INFO: At 2021-05-04 16:22:21 +0000 UTC - event for pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:27:19.450: INFO: At 2021-05-04 16:22:21 +0000 UTC - event for pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c: {kubelet node1} Failed: Error: ErrImagePull
May 4 16:27:19.450: INFO: At 2021-05-04 16:22:21 +0000 UTC - event for pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:27:19.450: INFO: At 2021-05-04 16:22:21 +0000 UTC - event for pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c: {kubelet node1} Failed: Error: ImagePullBackOff
May 4 16:27:19.450: INFO: At 2021-05-04 16:22:22 +0000 UTC - event for pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:27:19.450: INFO: At 2021-05-04 16:22:22 +0000 UTC - event for pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c: {kubelet node1} Failed: Error: ImagePullBackOff
May 4 16:27:19.452: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:27:19.452: INFO: pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:22:19 +0000 UTC ContainersNotReady containers with unready status: [busybox-main-container busybox-sub-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:22:19 +0000 UTC ContainersNotReady containers with unready status: [busybox-main-container busybox-sub-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:22:19 +0000 UTC }]
May 4 16:27:19.452: INFO:
May 4 16:27:19.457: INFO: Logging node info for node master1
May 4 16:27:19.460: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 43241 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64
beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:a
rchitecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 
0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:19.460: INFO: Logging kubelet events for node master1 May 4 16:27:19.462: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:27:19.472: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:19.472: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:19.472: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:19.472: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.472: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:27:19.472: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.472: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:19.472: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.472: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:19.472: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.472: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:27:19.472: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:27:19.472: INFO: Container docker-registry ready: true, restart count 0 May 4 16:27:19.472: INFO: Container nginx ready: true, restart count 0 May 4 16:27:19.472: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.472: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:27:19.472: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container 
statuses recorded) May 4 16:27:19.472: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:19.472: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:27:19.472: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.472: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:19.472: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.472: INFO: Container coredns ready: true, restart count 1 W0504 16:27:19.483576 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:19.507: INFO: Latency metrics for node master1 May 4 16:27:19.507: INFO: Logging node info for node master2 May 4 16:27:19.510: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 43236 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:19.510: INFO: Logging kubelet events for node master2 May 4 16:27:19.512: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:27:19.520: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.520: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:27:19.520: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.520: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:27:19.520: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:19.520: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:19.520: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:27:19.520: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.520: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:19.520: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.520: INFO: Container autoscaler ready: true, restart count 1 May 4 16:27:19.520: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:19.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:19.520: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:19.520: INFO: kube-apiserver-master2 
started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.520: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:19.520: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.520: INFO: Container kube-controller-manager ready: true, restart count 2 W0504 16:27:19.534942 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:19.562: INFO: Latency metrics for node master2 May 4 16:27:19.562: INFO: Logging node info for node master3 May 4 16:27:19.565: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 43235 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:10 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:10 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:19.565: INFO: Logging kubelet events for node master3 May 4 16:27:19.569: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:27:19.577: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:19.577: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:19.577: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:19.577: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.577: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:19.577: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.577: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:19.577: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.577: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:27:19.577: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.577: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:27:19.577: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:19.577: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:19.577: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:27:19.577: INFO: 
kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.577: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:19.577: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.577: INFO: Container coredns ready: true, restart count 1 W0504 16:27:19.591048 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:19.613: INFO: Latency metrics for node master3 May 4 16:27:19.613: INFO: Logging node info for node node1 May 4 16:27:19.616: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 43279 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true 
feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:16 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:16 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:16 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:16 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:19.616: INFO: Logging kubelet events for node node1 May 4 16:27:19.619: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:27:19.633: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.633: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:27:19.633: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:27:19.633: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:19.633: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:27:19.633: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:19.633: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:19.633: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:19.633: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:19.633: INFO: Init container install-cni ready: true, restart count 2 May 4 16:27:19.633: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:27:19.633: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.633: INFO: Container c ready: false, restart count 0 May 4 16:27:19.633: INFO: collectd-4755t started 
at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:27:19.633: INFO: Container collectd ready: true, restart count 0 May 4 16:27:19.633: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:27:19.633: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:27:19.633: INFO: busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 started at 2021-05-04 16:26:47 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.633: INFO: Container busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 ready: false, restart count 0 May 4 16:27:19.633: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.633: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:27:19.633: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.634: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:27:19.634: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.634: INFO: Container liveness-http ready: true, restart count 20 May 4 16:27:19.634: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.634: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:27:19.634: INFO: var-expansion-98b73d79-7107-4138-a06e-af820041f2eb started at 2021-05-04 16:27:03 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.634: INFO: Container dapi-container ready: false, restart count 0 May 4 16:27:19.634: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:27:19.634: INFO: Container discover ready: false, restart count 0 May 4 16:27:19.634: INFO: Container init ready: false, restart count 0 May 4 16:27:19.634: INFO: Container install ready: false, restart count 0 May 4 16:27:19.634: 
INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.634: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:19.634: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:27:19.634: INFO: Container nodereport ready: true, restart count 0 May 4 16:27:19.634: INFO: Container reconcile ready: true, restart count 0 May 4 16:27:19.634: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:27:19.634: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:27:19.634: INFO: Container grafana ready: true, restart count 0 May 4 16:27:19.634: INFO: Container prometheus ready: true, restart count 1 May 4 16:27:19.634: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:27:19.634: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:27:19.634: INFO: pod-sharedvolume-86936850-dee1-46bf-8b03-52287eae813c started at 2021-05-04 16:22:19 +0000 UTC (0+2 container statuses recorded) May 4 16:27:19.634: INFO: Container busybox-main-container ready: false, restart count 0 May 4 16:27:19.634: INFO: Container busybox-sub-container ready: false, restart count 0 May 4 16:27:19.634: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:27:19.634: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 W0504 16:27:19.647562 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:27:19.678: INFO: Latency metrics for node node1 May 4 16:27:19.678: INFO: Logging node info for node node2 May 4 16:27:19.680: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 43259 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:13 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:13 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:13 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:13 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:27:19.681: INFO: Logging kubelet events for node node2
May 4 16:27:19.683: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:27:19.696: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 started at 2021-05-04 16:22:49 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container main ready: false, restart count 0
May 4 16:27:19.696: INFO: pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84 started at 2021-05-04 16:26:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container test ready: false, restart count 0
May 4 16:27:19.696: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:27:19.696: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:27:19.696: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:19.696: INFO: Container nodereport ready: true, restart count 0
May 4 16:27:19.696: INFO: Container reconcile ready: true, restart count 0
May 4 16:27:19.696: INFO: var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720 started at 2021-05-04 16:25:54 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container dapi-container ready: false, restart count 0
May 4 16:27:19.696: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:27:19.696: INFO: Init container install-cni ready: true, restart count 2
May 4 16:27:19.696: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:27:19.696: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:27:19.696: INFO: ss-0 started at 2021-05-04 16:17:34 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container webserver ready: false, restart count 0
May 4 16:27:19.696: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:19.696: INFO: Container tas-controller ready: true, restart count 0
May 4 16:27:19.696: INFO: Container tas-extender ready: true, restart count 0
May 4 16:27:19.696: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container c ready: false, restart count 0
May 4 16:27:19.696: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:27:19.696: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:27:19.696: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:27:19.696: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:19.696: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:27:19.696: INFO: Container node-exporter ready: true, restart count 0
May 4 16:27:19.696: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container kube-multus ready: true, restart count 1
May 4 16:27:19.696: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:19.696: INFO: Container discover ready: false, restart count 0
May 4 16:27:19.696: INFO: Container init ready: false, restart count 0
May 4 16:27:19.696: INFO: Container install ready: false, restart count 0
May 4 16:27:19.696: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:19.696: INFO: Container collectd ready: true, restart count 0
May 4 16:27:19.696: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:27:19.696: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:27:19.696: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container c ready: false, restart count 0
May 4 16:27:19.696: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container c ready: false, restart count 0
May 4 16:27:19.696: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:19.696: INFO: Container nginx-proxy ready: true, restart count 2
W0504 16:27:19.707637      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:27:19.764: INFO: Latency metrics for node node2
May 4 16:27:19.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-424" for this suite.

• Failure [300.370 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:27:19.445: Unexpected error:
      <*errors.errorString | 0xc0002c2200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103
------------------------------
{"msg":"FAILED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":21,"skipped":503,"failed":5,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:19.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-a9b328e9-a9c9-4e3a-80b2-1e97869945cf
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:27:19.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1717" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":22,"skipped":504,"failed":5,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:26:30.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0504 16:26:31.830118      30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:27:33.846: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:27:33.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1341" for this suite.

• [SLOW TEST:63.584 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":35,"skipped":639,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:19.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:27:35.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7859" for this suite.

• [SLOW TEST:16.109 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":23,"skipped":510,"failed":5,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:35.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 4 16:27:35.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-478841fc-7909-441e-949e-56c003e78262" in namespace "projected-6607" to be "Succeeded or Failed"
May 4 16:27:35.994: INFO: Pod "downwardapi-volume-478841fc-7909-441e-949e-56c003e78262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172492ms
May 4 16:27:37.998: INFO: Pod "downwardapi-volume-478841fc-7909-441e-949e-56c003e78262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005483039s
May 4 16:27:40.001: INFO: Pod "downwardapi-volume-478841fc-7909-441e-949e-56c003e78262": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009217585s
STEP: Saw pod success
May 4 16:27:40.001: INFO: Pod "downwardapi-volume-478841fc-7909-441e-949e-56c003e78262" satisfied condition "Succeeded or Failed"
May 4 16:27:40.004: INFO: Trying to get logs from node node1 pod downwardapi-volume-478841fc-7909-441e-949e-56c003e78262 container client-container:
STEP: delete the pod
May 4 16:27:40.040: INFO: Waiting for pod downwardapi-volume-478841fc-7909-441e-949e-56c003e78262 to disappear
May 4 16:27:40.042: INFO: Pod downwardapi-volume-478841fc-7909-441e-949e-56c003e78262 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:27:40.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6607" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":522,"failed":5,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:40.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
May 4 16:27:40.103: INFO: Waiting up to 5m0s for pod "pod-3a212640-6210-4d6d-b0bb-375b19e5d1fb" in namespace "emptydir-3543" to be "Succeeded or Failed"
May 4 16:27:40.108: INFO: Pod "pod-3a212640-6210-4d6d-b0bb-375b19e5d1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.84809ms
May 4 16:27:42.112: INFO: Pod "pod-3a212640-6210-4d6d-b0bb-375b19e5d1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008895615s
May 4 16:27:44.115: INFO: Pod "pod-3a212640-6210-4d6d-b0bb-375b19e5d1fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012107044s
STEP: Saw pod success
May 4 16:27:44.115: INFO: Pod "pod-3a212640-6210-4d6d-b0bb-375b19e5d1fb" satisfied condition "Succeeded or Failed"
May 4 16:27:44.117: INFO: Trying to get logs from node node1 pod pod-3a212640-6210-4d6d-b0bb-375b19e5d1fb container test-container:
STEP: delete the pod
May 4 16:27:44.133: INFO: Waiting for pod pod-3a212640-6210-4d6d-b0bb-375b19e5d1fb to disappear
May 4 16:27:44.135: INFO: Pod pod-3a212640-6210-4d6d-b0bb-375b19e5d1fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:27:44.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3543" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":534,"failed":5,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:33.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 4 16:27:43.930: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:43.930: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:44.036: INFO: Exec stderr: ""
May 4 16:27:44.036: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:44.036: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:44.143: INFO: Exec stderr: ""
May 4 16:27:44.143: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:44.143: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:44.250: INFO: Exec stderr: ""
May 4 16:27:44.250: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:44.250: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:44.353: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 4 16:27:44.353: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:44.353: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:44.475: INFO: Exec stderr: ""
May 4 16:27:44.475: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:44.475: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:44.572: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 4 16:27:44.572: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:44.572: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:44.678: INFO: Exec stderr: ""
May 4 16:27:44.678: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:44.678: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:44.783: INFO: Exec stderr: ""
May 4 16:27:44.783: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:44.783: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:44.884: INFO: Exec stderr: ""
May 4 16:27:44.884: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 4 16:27:44.884: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:27:45.023: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:27:45.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7074" for this suite.

• [SLOW TEST:11.165 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":643,"failed":2,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:17:34.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-3657
[It] should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-3657
May 4 16:17:34.658: INFO: Found 0 stateful pods, waiting for 1
May 4 16:17:44.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:17:54.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:18:04.665: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:18:14.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:18:24.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:18:34.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:18:44.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:18:54.665: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:19:04.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:19:14.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:19:24.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:19:34.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:19:44.667: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:19:54.665: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:20:04.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:20:14.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:20:24.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:20:34.666: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:20:44.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:20:54.665: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:21:04.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:21:14.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:21:24.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:21:34.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:21:44.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:21:54.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:22:04.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:22:14.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:22:24.667: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:22:34.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:22:44.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:22:54.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:23:04.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:23:14.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:23:24.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:23:34.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:23:44.668: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:23:54.665: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:24:04.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:24:14.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:24:24.665: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:24:34.665: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:24:44.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:24:54.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:25:04.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:25:14.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:25:24.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:25:34.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:25:44.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:25:54.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:26:04.667: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:26:14.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:26:24.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:26:34.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:26:44.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:26:54.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:27:04.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:27:14.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:27:24.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:27:34.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:27:34.666: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 4 16:27:34.666: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x54075e0, 0xc0039fc000, 0x100000001, 0xc001480a00)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58 +0x10e
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.13()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:845 +0x2e9
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003576d80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc003576d80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc003576d80, 0x4de37a0)
    /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 4 16:27:34.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3657 describe po ss-0'
May 4 16:27:34.855: INFO: stderr: ""
May 4 16:27:34.855: INFO: stdout: "Name: ss-0\nNamespace: statefulset-3657\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Tue, 04 May 2021 16:17:34 +0000\nLabels: baz=blah\n controller-revision-hash=ss-65c7964b94\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.221\"\n ],\n \"mac\": \"b6:a5:ee:ba:63:98\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.221\"\n ],\n \"mac\": \"b6:a5:ee:ba:63:98\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Pending\nIP: 10.244.3.221\nIPs:\n IP: 10.244.3.221\nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Container ID: \n Image: docker.io/library/httpd:2.4.38-alpine\n Image ID: 
\n Port: \n Host Port: \n State: Waiting\n Reason: ImagePullBackOff\n Ready: False\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ktwcr (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n default-token-ktwcr:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ktwcr\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10m default-scheduler Successfully assigned statefulset-3657/ss-0 to node2\n Normal AddedInterface 9m58s multus Add eth0 [10.244.3.221/24]\n Normal Pulling 8m32s (x4 over 9m58s) kubelet Pulling image \"docker.io/library/httpd:2.4.38-alpine\"\n Warning Failed 8m31s (x4 over 9m57s) kubelet Failed to pull image \"docker.io/library/httpd:2.4.38-alpine\": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\n Warning Failed 8m31s (x4 over 9m57s) kubelet Error: ErrImagePull\n Normal BackOff 8m8s (x6 over 9m55s) kubelet Back-off pulling image \"docker.io/library/httpd:2.4.38-alpine\"\n Warning Failed 4m50s (x20 over 9m55s) kubelet Error: ImagePullBackOff\n" May 4 16:27:34.855: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-3657 Priority: 0 Node: node2/10.10.190.208 Start Time: Tue, 04 May 2021 16:17:34 +0000 Labels: baz=blah controller-revision-hash=ss-65c7964b94 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.221" ], "mac": "b6:a5:ee:ba:63:98", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.221" ], "mac": "b6:a5:ee:ba:63:98", "default": true, "dns": {} }] kubernetes.io/psp: collectd Status: Pending IP: 10.244.3.221 IPs: IP: 10.244.3.221 Controlled By: StatefulSet/ss Containers: webserver: Container ID: Image: docker.io/library/httpd:2.4.38-alpine Image ID: Port: Host Port: State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-ktwcr (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-ktwcr: Type: Secret (a volume populated by a Secret) SecretName: default-token-ktwcr Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully 
assigned statefulset-3657/ss-0 to node2 Normal AddedInterface 9m58s multus Add eth0 [10.244.3.221/24] Normal Pulling 8m32s (x4 over 9m58s) kubelet Pulling image "docker.io/library/httpd:2.4.38-alpine" Warning Failed 8m31s (x4 over 9m57s) kubelet Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Warning Failed 8m31s (x4 over 9m57s) kubelet Error: ErrImagePull Normal BackOff 8m8s (x6 over 9m55s) kubelet Back-off pulling image "docker.io/library/httpd:2.4.38-alpine" Warning Failed 4m50s (x20 over 9m55s) kubelet Error: ImagePullBackOff May 4 16:27:34.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3657 logs ss-0 --tail=100' May 4 16:27:35.004: INFO: rc: 1 May 4 16:27:35.004: INFO: Last 100 log lines of ss-0: May 4 16:27:35.004: INFO: Deleting all statefulset in ns statefulset-3657 May 4 16:27:35.007: INFO: Scaling statefulset ss to 0 May 4 16:27:45.019: INFO: Waiting for statefulset status.replicas updated to 0 May 4 16:27:45.021: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "statefulset-3657". STEP: Found 9 events. 
May 4 16:27:45.034: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss-0: { } Scheduled: Successfully assigned statefulset-3657/ss-0 to node2 May 4 16:27:45.034: INFO: At 2021-05-04 16:17:34 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful May 4 16:27:45.034: INFO: At 2021-05-04 16:17:36 +0000 UTC - event for ss-0: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:27:45.034: INFO: At 2021-05-04 16:17:36 +0000 UTC - event for ss-0: {multus } AddedInterface: Add eth0 [10.244.3.221/24] May 4 16:27:45.034: INFO: At 2021-05-04 16:17:37 +0000 UTC - event for ss-0: {kubelet node2} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:27:45.034: INFO: At 2021-05-04 16:17:37 +0000 UTC - event for ss-0: {kubelet node2} Failed: Error: ErrImagePull May 4 16:27:45.034: INFO: At 2021-05-04 16:17:39 +0000 UTC - event for ss-0: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine" May 4 16:27:45.034: INFO: At 2021-05-04 16:17:39 +0000 UTC - event for ss-0: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:27:45.034: INFO: At 2021-05-04 16:27:35 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful May 4 16:27:45.036: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:27:45.036: INFO: May 4 16:27:45.040: INFO: Logging node info for node master1 May 4 16:27:45.043: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 43516 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux 
node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"
f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:45.044: INFO: Logging kubelet events for node master1 May 4 16:27:45.046: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:27:45.055: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.055: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:45.055: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.055: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:45.055: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.055: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:27:45.055: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.055: INFO: Container docker-registry ready: true, restart count 0 May 4 16:27:45.055: INFO: Container nginx ready: true, restart count 0 May 4 16:27:45.055: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.055: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:45.055: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:45.055: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.055: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:27:45.055: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:45.055: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:45.055: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:27:45.055: INFO: kube-multus-ds-amd64-jflvf started 
at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.056: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:45.056: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.056: INFO: Container coredns ready: true, restart count 1 May 4 16:27:45.056: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.056: INFO: Container nfd-controller ready: true, restart count 0 W0504 16:27:45.070851 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:45.103: INFO: Latency metrics for node master1 May 4 16:27:45.103: INFO: Logging node info for node master2 May 4 16:27:45.105: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 43512 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:45.106: INFO: Logging kubelet events for node master2 May 4 16:27:45.108: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:27:45.115: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.115: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:45.115: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:45.115: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.115: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:45.115: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.115: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:45.115: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.115: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:27:45.115: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.115: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:27:45.115: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:45.115: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:45.115: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:27:45.115: INFO: 
kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.115: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:45.115: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.115: INFO: Container autoscaler ready: true, restart count 1 W0504 16:27:45.129277 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:45.165: INFO: Latency metrics for node master2 May 4 16:27:45.165: INFO: Logging node info for node master3 May 4 16:27:45.167: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 43511 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:45.167: INFO: Logging kubelet events for node master3 May 4 16:27:45.169: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:27:45.177: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:45.177: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:45.177: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:27:45.177: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.177: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:45.177: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.177: INFO: Container coredns ready: true, restart count 1 May 4 16:27:45.177: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.177: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:45.177: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:45.177: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.177: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:45.177: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.177: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:45.177: INFO: 
kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.177: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:27:45.177: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.177: INFO: Container kube-proxy ready: true, restart count 2 W0504 16:27:45.192676 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:45.217: INFO: Latency metrics for node master3 May 4 16:27:45.217: INFO: Logging node info for node node1 May 4 16:27:45.219: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 43445 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:36 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:36 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:36 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:36 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:45.220: INFO: Logging kubelet events for node node1 May 4 16:27:45.222: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:27:45.236: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.237: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:27:45.237: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.237: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:27:45.237: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.237: INFO: Container liveness-http ready: true, restart count 20 May 4 16:27:45.237: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:27:45.237: INFO: Container discover ready: false, restart count 0 May 4 16:27:45.237: INFO: Container init ready: false, restart count 0 May 4 16:27:45.237: INFO: Container install ready: false, restart count 0 May 4 16:27:45.237: INFO: var-expansion-98b73d79-7107-4138-a06e-af820041f2eb started at 2021-05-04 16:27:03 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.237: INFO: Container dapi-container ready: false, restart count 0 May 4 16:27:45.237: INFO: downwardapi-volume-01e4cadd-7f79-447f-9db0-6ab6575e0169 started at 2021-05-04 16:27:44 +0000 UTC (0+1 container statuses 
recorded) May 4 16:27:45.237: INFO: Container client-container ready: false, restart count 0 May 4 16:27:45.237: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.237: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:45.237: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.237: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:27:45.237: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.237: INFO: Container nodereport ready: true, restart count 0 May 4 16:27:45.237: INFO: Container reconcile ready: true, restart count 0 May 4 16:27:45.237: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:27:45.237: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:27:45.237: INFO: Container grafana ready: true, restart count 0 May 4 16:27:45.237: INFO: Container prometheus ready: true, restart count 1 May 4 16:27:45.237: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:27:45.237: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:27:45.237: INFO: test-host-network-pod started at 2021-05-04 16:27:39 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.237: INFO: Container busybox-1 ready: true, restart count 0 May 4 16:27:45.237: INFO: Container busybox-2 ready: true, restart count 0 May 4 16:27:45.237: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:45.237: INFO: Init container install-cni ready: true, restart count 2 May 4 16:27:45.237: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:27:45.237: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 
container statuses recorded) May 4 16:27:45.237: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:27:45.237: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.237: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:45.237: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:27:45.237: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.237: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:45.237: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:45.237: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:27:45.237: INFO: Container collectd ready: true, restart count 0 May 4 16:27:45.237: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:27:45.237: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:27:45.237: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.237: INFO: Container c ready: false, restart count 0 May 4 16:27:45.237: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.237: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:27:45.237: INFO: busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 started at 2021-05-04 16:26:47 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.237: INFO: Container busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 ready: false, restart count 0 W0504 16:27:45.247270 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:27:45.425: INFO: Latency metrics for node node1 May 4 16:27:45.425: INFO: Logging node info for node node2 May 4 16:27:45.427: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 43552 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:43 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:43 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:43 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:43 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:45.428: INFO: Logging kubelet events for node node2 May 4 16:27:45.431: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:27:45.442: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container c ready: false, restart count 0 May 4 16:27:45.442: INFO: test-pod started at 2021-05-04 16:27:33 +0000 UTC (0+3 container statuses recorded) May 4 16:27:45.442: INFO: Container busybox-1 ready: true, restart count 0 May 4 16:27:45.442: INFO: Container busybox-2 ready: true, restart count 0 May 4 16:27:45.442: INFO: Container busybox-3 ready: true, restart count 0 May 4 16:27:45.442: INFO: downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 started at 2021-05-04 16:27:45 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container dapi-container ready: false, restart count 0 May 4 16:27:45.442: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container nginx-proxy ready: true, restart count 2 May 
4 16:27:45.442: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:27:45.442: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:27:45.442: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.442: INFO: Container nodereport ready: true, restart count 0 May 4 16:27:45.442: INFO: Container reconcile ready: true, restart count 0 May 4 16:27:45.442: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 started at 2021-05-04 16:22:49 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container main ready: false, restart count 0 May 4 16:27:45.442: INFO: pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84 started at 2021-05-04 16:26:40 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container test ready: false, restart count 0 May 4 16:27:45.442: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:45.442: INFO: Init container install-cni ready: true, restart count 2 May 4 16:27:45.442: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:27:45.442: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:27:45.442: INFO: var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720 started at 2021-05-04 16:25:54 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container dapi-container ready: false, restart count 0 May 4 16:27:45.442: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses 
recorded) May 4 16:27:45.442: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:27:45.442: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:27:45.442: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.442: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:45.442: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:45.442: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:27:45.442: INFO: Container tas-controller ready: true, restart count 0 May 4 16:27:45.442: INFO: Container tas-extender ready: true, restart count 0 May 4 16:27:45.442: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container c ready: false, restart count 0 May 4 16:27:45.442: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:27:45.442: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:45.442: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:27:45.442: INFO: Container discover ready: false, restart count 0 May 4 16:27:45.442: INFO: Container init ready: false, restart count 0 May 4 16:27:45.442: INFO: Container install ready: false, restart count 0 May 4 16:27:45.442: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:27:45.442: INFO: Container collectd ready: true, restart 
count 0 May 4 16:27:45.442: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:27:45.442: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:27:45.442: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:27:45.442: INFO: Container c ready: false, restart count 0 W0504 16:27:45.456865 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:45.508: INFO: Latency metrics for node node2 May 4 16:27:45.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3657" for this suite. • Failure [610.886 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:27:34.666: Failed waiting for pods to enter running: timed out waiting for the condition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":19,"skipped":410,"failed":3,"failures":["[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale 
subresource [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:27:45.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 4 16:27:45.558: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9801 /api/v1/namespaces/watch-9801/configmaps/e2e-watch-test-watch-closed de52e190-1e5e-4c03-ba96-11782bdd4e04 43598 0 2021-05-04 16:27:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-04 16:27:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:27:45.558: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9801 /api/v1/namespaces/watch-9801/configmaps/e2e-watch-test-watch-closed de52e190-1e5e-4c03-ba96-11782bdd4e04 43599 0 2021-05-04 16:27:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-04 16:27:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch 
on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 4 16:27:45.569: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9801 /api/v1/namespaces/watch-9801/configmaps/e2e-watch-test-watch-closed de52e190-1e5e-4c03-ba96-11782bdd4e04 43600 0 2021-05-04 16:27:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-04 16:27:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 16:27:45.569: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9801 /api/v1/namespaces/watch-9801/configmaps/e2e-watch-test-watch-closed de52e190-1e5e-4c03-ba96-11782bdd4e04 43601 0 2021-05-04 16:27:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-04 16:27:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:27:45.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9801" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":20,"skipped":412,"failed":3,"failures":["[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:27:44.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 16:27:44.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01e4cadd-7f79-447f-9db0-6ab6575e0169" in namespace "projected-8321" to be "Succeeded or Failed" May 4 16:27:44.221: INFO: Pod "downwardapi-volume-01e4cadd-7f79-447f-9db0-6ab6575e0169": Phase="Pending", Reason="", readiness=false. Elapsed: 1.833762ms May 4 16:27:46.224: INFO: Pod "downwardapi-volume-01e4cadd-7f79-447f-9db0-6ab6575e0169": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.004652656s May 4 16:27:48.227: INFO: Pod "downwardapi-volume-01e4cadd-7f79-447f-9db0-6ab6575e0169": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007640459s STEP: Saw pod success May 4 16:27:48.227: INFO: Pod "downwardapi-volume-01e4cadd-7f79-447f-9db0-6ab6575e0169" satisfied condition "Succeeded or Failed" May 4 16:27:48.230: INFO: Trying to get logs from node node1 pod downwardapi-volume-01e4cadd-7f79-447f-9db0-6ab6575e0169 container client-container: STEP: delete the pod May 4 16:27:48.244: INFO: Waiting for pod downwardapi-volume-01e4cadd-7f79-447f-9db0-6ab6575e0169 to disappear May 4 16:27:48.246: INFO: Pod downwardapi-volume-01e4cadd-7f79-447f-9db0-6ab6575e0169 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:27:48.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8321" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":560,"failed":5,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:48.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:27:48.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3255" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":27,"skipped":584,"failed":5,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:45.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-ce572b8c-3361-43a9-906f-3000f7d23c0b
STEP: Creating a pod to test consume secrets
May 4 16:27:45.663: INFO: Waiting up to 5m0s for pod "pod-secrets-bfd520ea-f8ed-4c80-87d0-ee0649511158" in namespace "secrets-2249" to be "Succeeded or Failed"
May 4 16:27:45.665: INFO: Pod "pod-secrets-bfd520ea-f8ed-4c80-87d0-ee0649511158": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127154ms
May 4 16:27:47.669: INFO: Pod "pod-secrets-bfd520ea-f8ed-4c80-87d0-ee0649511158": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005239857s
May 4 16:27:49.673: INFO: Pod "pod-secrets-bfd520ea-f8ed-4c80-87d0-ee0649511158": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009284626s
STEP: Saw pod success
May 4 16:27:49.673: INFO: Pod "pod-secrets-bfd520ea-f8ed-4c80-87d0-ee0649511158" satisfied condition "Succeeded or Failed"
May 4 16:27:49.676: INFO: Trying to get logs from node node2 pod pod-secrets-bfd520ea-f8ed-4c80-87d0-ee0649511158 container secret-volume-test:
STEP: delete the pod
May 4 16:27:49.692: INFO: Waiting for pod pod-secrets-bfd520ea-f8ed-4c80-87d0-ee0649511158 to disappear
May 4 16:27:49.694: INFO: Pod pod-secrets-bfd520ea-f8ed-4c80-87d0-ee0649511158 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:27:49.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2249" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":439,"failed":3,"failures":["[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:22:49.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:22:49.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
May 4 16:27:49.928: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002c4200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc003822000, 0xc00396c800, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 +0xfe
k8s.io/kubernetes/test/e2e/common.glob..func18.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:560 +0x48c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002965080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002965080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002965080, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "pods-9058".
STEP: Found 7 events.
May 4 16:27:49.932: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946: { } Scheduled: Successfully assigned pods-9058/pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 to node2
May 4 16:27:49.932: INFO: At 2021-05-04 16:22:51 +0000 UTC - event for pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946: {multus } AddedInterface: Add eth0 [10.244.3.250/24]
May 4 16:27:49.932: INFO: At 2021-05-04 16:22:51 +0000 UTC - event for pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:27:49.932: INFO: At 2021-05-04 16:22:52 +0000 UTC - event for pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:27:49.932: INFO: At 2021-05-04 16:22:52 +0000 UTC - event for pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:27:49.932: INFO: At 2021-05-04 16:22:52 +0000 UTC - event for pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:27:49.932: INFO: At 2021-05-04 16:22:52 +0000 UTC - event for pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:27:49.935: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:27:49.935: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:22:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:22:49 +0000 UTC ContainersNotReady containers with unready status: [main]} {ContainersReady False
0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:22:49 +0000 UTC ContainersNotReady containers with unready status: [main]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:22:49 +0000 UTC }] May 4 16:27:49.935: INFO: May 4 16:27:49.939: INFO: Logging node info for node master1 May 4 16:27:49.942: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 43516 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:49.942: INFO: Logging kubelet events for node master1 May 4 16:27:49.944: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:27:49.954: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 
16:27:49.954: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:27:49.954: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:49.954: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:49.954: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:27:49.954: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:49.954: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:49.954: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:49.954: INFO: Container coredns ready: true, restart count 1 May 4 16:27:49.954: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:49.954: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:49.954: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:49.954: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:49.954: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:27:49.954: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:49.954: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:49.954: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:49.954: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:49.954: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:49.954: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:27:49.954: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container 
statuses recorded) May 4 16:27:49.954: INFO: Container docker-registry ready: true, restart count 0 May 4 16:27:49.954: INFO: Container nginx ready: true, restart count 0 W0504 16:27:49.967045 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:49.993: INFO: Latency metrics for node master1 May 4 16:27:49.993: INFO: Logging node info for node master2 May 4 16:27:49.995: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 43512 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:49.996: INFO: Logging kubelet events for node master2 May 4 16:27:49.999: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:27:50.007: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.007: INFO: Container autoscaler ready: true, restart count 1 May 4 16:27:50.007: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:50.007: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:50.007: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:50.007: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.007: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:50.007: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.007: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:50.007: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.007: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:27:50.007: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.007: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:27:50.007: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container 
statuses recorded) May 4 16:27:50.007: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:50.007: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:27:50.007: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.007: INFO: Container kube-multus ready: true, restart count 1 W0504 16:27:50.018856 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:50.043: INFO: Latency metrics for node master2 May 4 16:27:50.043: INFO: Logging node info for node master3 May 4 16:27:50.045: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 43511 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:40 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:50.045: INFO: Logging kubelet events for node master3 May 4 16:27:50.048: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:27:50.056: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:50.056: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:50.056: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:27:50.056: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.056: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:50.056: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.056: INFO: Container coredns ready: true, restart count 1 May 4 16:27:50.056: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:50.056: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:50.056: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:50.056: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.056: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:50.056: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.056: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:50.056: INFO: 
kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.056: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:27:50.056: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.056: INFO: Container kube-proxy ready: true, restart count 2 W0504 16:27:50.070218 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:50.093: INFO: Latency metrics for node master3 May 4 16:27:50.093: INFO: Logging node info for node node1 May 4 16:27:50.096: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 43620 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:46 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:46 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:46 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:50.097: INFO: Logging kubelet events for node node1 May 4 16:27:50.098: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:27:50.117: INFO: simpletest-rc-to-be-deleted-ktdvd started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.117: INFO: Container nginx ready: false, restart count 0 May 4 16:27:50.117: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:50.117: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:50.117: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:27:50.117: INFO: Container nodereport ready: true, restart count 0 May 4 16:27:50.117: INFO: Container reconcile ready: true, restart count 0 May 4 16:27:50.117: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:27:50.117: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:27:50.117: INFO: Container grafana ready: true, restart count 0 May 4 16:27:50.117: INFO: Container prometheus ready: true, restart count 1 May 4 16:27:50.117: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:27:50.117: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:27:50.117: INFO: simpletest-rc-to-be-deleted-fqp76 started at 2021-05-04 16:27:48 +0000 UTC (0+1 
container statuses recorded)
May 4 16:27:50.117: INFO: Container nginx ready: false, restart count 0
May 4 16:27:50.117: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.117: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 16:27:50.117: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.117: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:27:50.117: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:50.117: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:27:50.117: INFO: Container prometheus-operator ready: true, restart count 0
May 4 16:27:50.117: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:50.117: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:27:50.117: INFO: Container node-exporter ready: true, restart count 0
May 4 16:27:50.117: INFO: test-host-network-pod started at 2021-05-04 16:27:39 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:50.117: INFO: Container busybox-1 ready: true, restart count 0
May 4 16:27:50.117: INFO: Container busybox-2 ready: true, restart count 0
May 4 16:27:50.117: INFO: simpletest-rc-to-be-deleted-q4bv2 started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.117: INFO: Container nginx ready: false, restart count 0
May 4 16:27:50.117: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:27:50.117: INFO: Init container install-cni ready: true, restart count 2
May 4 16:27:50.117: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:27:50.117: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.118: INFO: Container c ready: false, restart count 0
May 4 16:27:50.118: INFO: simpletest-rc-to-be-deleted-bv44b started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.118: INFO: Container nginx ready: false, restart count 0
May 4 16:27:50.118: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:50.118: INFO: Container collectd ready: true, restart count 0
May 4 16:27:50.118: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:27:50.118: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:27:50.118: INFO: busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 started at 2021-05-04 16:26:47 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.118: INFO: Container busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 ready: false, restart count 0
May 4 16:27:50.118: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.118: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:27:50.118: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.118: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:27:50.118: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.118: INFO: Container liveness-http ready: true, restart count 20
May 4 16:27:50.118: INFO: simpletest-rc-to-be-deleted-zwwgg started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.118: INFO: Container nginx ready: false, restart count 0
May 4 16:27:50.118: INFO: dns-test-06270070-c356-4d7e-830e-8ea00f5fb735 started at 2021-05-04 16:27:49 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:50.118: INFO: Container jessie-querier ready: false, restart count 0
May 4 16:27:50.118: INFO: Container querier ready: false, restart count 0
May 4 16:27:50.118: INFO: Container webserver ready: false, restart count 0
May 4 16:27:50.118: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.118: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:27:50.118: INFO: var-expansion-98b73d79-7107-4138-a06e-af820041f2eb started at 2021-05-04 16:27:03 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.118: INFO: Container dapi-container ready: false, restart count 0
May 4 16:27:50.118: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:50.118: INFO: Container discover ready: false, restart count 0
May 4 16:27:50.118: INFO: Container init ready: false, restart count 0
May 4 16:27:50.118: INFO: Container install ready: false, restart count 0
W0504 16:27:50.131014 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:27:50.426: INFO: Latency metrics for node node1 May 4 16:27:50.426: INFO: Logging node info for node node2 May 4 16:27:50.429: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 43552 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:43 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:43 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:43 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:43 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:27:50.430: INFO: Logging kubelet events for node node2
May 4 16:27:50.431: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:27:50.448: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:27:50.448: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:27:50.448: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:50.448: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:27:50.448: INFO: Container node-exporter ready: true, restart count 0
May 4 16:27:50.448: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:50.448: INFO: Container tas-controller ready: true, restart count 0
May 4 16:27:50.448: INFO: Container tas-extender ready: true, restart count 0
May 4 16:27:50.448: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container c ready: false, restart count 0
May 4 16:27:50.448: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:27:50.448: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container kube-multus ready: true, restart count 1
May 4 16:27:50.448: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:50.448: INFO: Container discover ready: false, restart count 0
May 4 16:27:50.448: INFO: Container init ready: false, restart count 0
May 4 16:27:50.448: INFO: Container install ready: false, restart count 0
May 4 16:27:50.448: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:50.448: INFO: Container collectd ready: true, restart count 0
May 4 16:27:50.448: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:27:50.448: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:27:50.448: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container c ready: false, restart count 0
May 4 16:27:50.448: INFO: simpletest-rc-to-be-deleted-wktpp started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container nginx ready: false, restart count 0
May 4 16:27:50.448: INFO: simpletest-rc-to-be-deleted-ngsgh started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container nginx ready: false, restart count 0
May 4 16:27:50.448: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container c ready: false, restart count 0
May 4 16:27:50.448: INFO: test-pod started at 2021-05-04 16:27:33 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:50.448: INFO: Container busybox-1 ready: true, restart count 0
May 4 16:27:50.448: INFO: Container busybox-2 ready: true, restart count 0
May 4 16:27:50.448: INFO: Container busybox-3 ready: true, restart count 0
May 4 16:27:50.448: INFO: downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 started at 2021-05-04 16:27:45 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container dapi-container ready: false, restart count 0
May 4 16:27:50.448: INFO: simpletest-rc-to-be-deleted-l674c started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container nginx ready: false, restart count 0
May 4 16:27:50.448: INFO: simpletest-rc-to-be-deleted-bxkqd started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.448: INFO: Container nginx ready: false, restart count 0
May 4 16:27:50.448: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.449: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:27:50.449: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.449: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:27:50.449: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.449: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:27:50.449: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:50.449: INFO: Container nodereport ready: true, restart count 0
May 4 16:27:50.449: INFO: Container reconcile ready: true, restart count 0
May 4 16:27:50.449: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 started at 2021-05-04 16:22:49 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.449: INFO: Container main ready: false, restart count 0
May 4 16:27:50.449: INFO: pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84 started at 2021-05-04 16:26:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.449: INFO: Container test ready: false, restart count 0
May 4 16:27:50.449: INFO: simpletest-rc-to-be-deleted-tz4c6 started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.449: INFO: Container nginx ready: false, restart count 0
May 4 16:27:50.449: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:27:50.449: INFO: Init container install-cni ready: true, restart count 2
May 4 16:27:50.449: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:27:50.449: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.449: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:27:50.449: INFO: var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720 started at 2021-05-04 16:25:54 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:50.449: INFO: Container dapi-container ready: false, restart count 0
W0504 16:27:50.462192 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:27:51.174: INFO: Latency metrics for node node2
May 4 16:27:51.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9058" for this suite.
• Failure [301.306 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should support remote command execution over websockets [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:27:49.928: Unexpected error:
      <*errors.errorString | 0xc0002c4200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103
------------------------------
{"msg":"FAILED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":439,"failed":3,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:25:54.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
May 4 16:27:54.342: FAIL: while waiting for pod to be running
Unexpected error:
    <*errors.errorString | 0xc0002801f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/common.glob..func9.8()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:331 +0x4ad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001827080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001827080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001827080, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "var-expansion-9763".
STEP: Found 9 events.
May 4 16:27:54.347: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720: { } Scheduled: Successfully assigned var-expansion-9763/var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720 to node2
May 4 16:27:54.347: INFO: At 2021-05-04 16:25:55 +0000 UTC - event for var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720: {multus } AddedInterface: Add eth0 [10.244.3.253/24]
May 4 16:27:54.347: INFO: At 2021-05-04 16:25:55 +0000 UTC - event for var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:27:54.347: INFO: At 2021-05-04 16:25:56 +0000 UTC - event for var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:27:54.347: INFO: At 2021-05-04 16:25:56 +0000 UTC - event for var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:27:54.347: INFO: At 2021-05-04 16:25:57 +0000 UTC - event for var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:27:54.347: INFO: At 2021-05-04 16:25:59 +0000 UTC - event for var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720: {multus } AddedInterface: Add eth0 [10.244.3.254/24]
May 4 16:27:54.347: INFO: At 2021-05-04 16:25:59 +0000 UTC - event for var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:27:54.347: INFO: At 2021-05-04 16:25:59 +0000 UTC - event for var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:27:54.349: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:27:54.349: INFO: var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720 node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:54 +0000 UTC ContainersNotReady containers with unready status: [dapi-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:54 +0000 UTC ContainersNotReady containers with unready status: [dapi-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:54 +0000 UTC }]
May 4 16:27:54.349: INFO:
May 4 16:27:54.353: INFO: Logging node info for node master1
May 4 16:27:54.355: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 43794 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64
kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:54.356: INFO: Logging kubelet events for node master1 May 4 16:27:54.358: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:27:54.367: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:27:54.367: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:27:54.367: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:54.367: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:54.367: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:27:54.367: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:54.367: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:54.367: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:54.367: INFO: Container coredns ready: true, restart count 1 May 4 16:27:54.367: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:54.367: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:54.367: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:54.368: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:54.368: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:27:54.368: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:54.368: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:54.368: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 
16:27:54.368: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:54.368: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:54.368: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:27:54.368: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:27:54.368: INFO: Container docker-registry ready: true, restart count 0 May 4 16:27:54.368: INFO: Container nginx ready: true, restart count 0 W0504 16:27:54.379807 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:55.072: INFO: Latency metrics for node master1 May 4 16:27:55.072: INFO: Logging node info for node master2 May 4 16:27:55.074: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 43769 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:55.074: INFO: Logging kubelet events for node master2 May 4 16:27:55.076: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:27:55.084: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:55.084: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:55.084: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:27:55.084: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.084: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:55.084: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.084: INFO: Container autoscaler ready: true, restart count 1 May 4 16:27:55.084: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:55.084: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:55.084: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:55.084: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.084: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:55.084: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.084: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:55.084: 
INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.084: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:27:55.084: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.084: INFO: Container kube-proxy ready: true, restart count 2 W0504 16:27:55.096052 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:55.125: INFO: Latency metrics for node master2 May 4 16:27:55.125: INFO: Logging node info for node master3 May 4 16:27:55.128: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 43768 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:50 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:50 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:55.128: INFO: Logging kubelet events for node master3 May 4 16:27:55.131: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:27:55.139: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.139: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:27:55.139: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.139: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:27:55.139: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.139: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:27:55.139: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.139: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:27:55.139: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:55.139: INFO: Init container install-cni ready: true, restart count 0 May 4 16:27:55.139: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:27:55.139: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.139: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:55.139: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container 
statuses recorded) May 4 16:27:55.139: INFO: Container coredns ready: true, restart count 1 May 4 16:27:55.139: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:55.139: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:55.139: INFO: Container node-exporter ready: true, restart count 0 W0504 16:27:55.153083 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:55.183: INFO: Latency metrics for node master3 May 4 16:27:55.183: INFO: Logging node info for node node1 May 4 16:27:55.186: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 43620 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:46 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:46 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:46 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:55.186: INFO: Logging kubelet events for node node1 May 4 16:27:55.189: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:27:55.206: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:27:55.207: INFO: busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 started at 2021-05-04 16:26:47 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 ready: false, restart count 0 May 4 16:27:55.207: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container liveness-http ready: true, restart count 21 May 4 16:27:55.207: INFO: dns-test-06270070-c356-4d7e-830e-8ea00f5fb735 started at 2021-05-04 16:27:49 +0000 UTC (0+3 container statuses recorded) May 4 16:27:55.207: INFO: Container jessie-querier ready: false, restart count 0 May 4 16:27:55.207: INFO: Container querier ready: false, restart count 0 May 4 16:27:55.207: INFO: Container webserver ready: false, restart count 0 May 4 16:27:55.207: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:27:55.207: INFO: 
kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:27:55.207: INFO: pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c started at 2021-05-04 16:27:51 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container projected-secret-volume-test ready: false, restart count 0 May 4 16:27:55.207: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:27:55.207: INFO: Container discover ready: false, restart count 0 May 4 16:27:55.207: INFO: Container init ready: false, restart count 0 May 4 16:27:55.207: INFO: Container install ready: false, restart count 0 May 4 16:27:55.207: INFO: var-expansion-98b73d79-7107-4138-a06e-af820041f2eb started at 2021-05-04 16:27:03 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container dapi-container ready: false, restart count 0 May 4 16:27:55.207: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:55.207: INFO: simpletest-rc-to-be-deleted-ktdvd started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container nginx ready: false, restart count 0 May 4 16:27:55.207: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:27:55.207: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:27:55.207: INFO: Container grafana ready: true, restart count 0 May 4 16:27:55.207: INFO: Container prometheus ready: true, restart count 1 May 4 16:27:55.207: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:27:55.207: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:27:55.207: INFO: 
simpletest-rc-to-be-deleted-fqp76 started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container nginx ready: false, restart count 0 May 4 16:27:55.207: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:27:55.207: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:27:55.207: INFO: Container nodereport ready: true, restart count 0 May 4 16:27:55.207: INFO: Container reconcile ready: true, restart count 0 May 4 16:27:55.207: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:27:55.207: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:55.207: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:27:55.207: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:27:55.207: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:27:55.207: INFO: Container node-exporter ready: true, restart count 0 May 4 16:27:55.207: INFO: test-host-network-pod started at 2021-05-04 16:27:39 +0000 UTC (0+2 container statuses recorded) May 4 16:27:55.207: INFO: Container busybox-1 ready: true, restart count 0 May 4 16:27:55.207: INFO: Container busybox-2 ready: true, restart count 0 May 4 16:27:55.207: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:27:55.207: INFO: Init container install-cni ready: true, restart count 2 May 4 16:27:55.207: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:27:55.207: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container 
nfd-worker ready: true, restart count 0 May 4 16:27:55.207: INFO: simpletest-rc-to-be-deleted-bv44b started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container nginx ready: false, restart count 0 May 4 16:27:55.207: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:27:55.207: INFO: Container collectd ready: true, restart count 0 May 4 16:27:55.207: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:27:55.207: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:27:55.207: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.207: INFO: Container c ready: false, restart count 0 W0504 16:27:55.220912 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:27:55.634: INFO: Latency metrics for node node1 May 4 16:27:55.634: INFO: Logging node info for node node2 May 4 16:27:55.636: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 43883 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true 
feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:53 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:53 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:27:53 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:27:53 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 
nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:27:55.637: INFO: Logging kubelet events for node node2 May 4 16:27:55.639: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:27:55.654: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:27:55.654: INFO: Container kube-multus ready: true, restart count 1 May 4 16:27:55.654: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:27:55.654: INFO: Container discover ready: false, restart count 0 May 4 16:27:55.654: INFO: Container init ready: false, restart count 0 May 4 16:27:55.654: INFO: Container install ready: false, restart count 0 May 4 16:27:55.654: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:27:55.654: INFO: Container collectd ready: true, restart count 0 May 4 16:27:55.654: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:27:55.654: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:27:55.654: INFO: 
fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container c ready: false, restart count 0
May 4 16:27:55.654: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container c ready: false, restart count 0
May 4 16:27:55.654: INFO: test-pod started at 2021-05-04 16:27:33 +0000 UTC (0+3 container statuses recorded)
May 4 16:27:55.654: INFO: Container busybox-1 ready: true, restart count 0
May 4 16:27:55.654: INFO: Container busybox-2 ready: true, restart count 0
May 4 16:27:55.654: INFO: Container busybox-3 ready: true, restart count 0
May 4 16:27:55.654: INFO: downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 started at 2021-05-04 16:27:45 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container dapi-container ready: false, restart count 0
May 4 16:27:55.654: INFO: simpletest-rc-to-be-deleted-l674c started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container nginx ready: false, restart count 0
May 4 16:27:55.654: INFO: simpletest-rc-to-be-deleted-bxkqd started at 2021-05-04 16:27:48 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container nginx ready: false, restart count 0
May 4 16:27:55.654: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:27:55.654: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:27:55.654: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:27:55.654: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:55.654: INFO: Container nodereport ready: true, restart count 0
May 4 16:27:55.654: INFO: Container reconcile ready: true, restart count 0
May 4 16:27:55.654: INFO: pod-exec-websocket-2863438c-c2df-4c3c-9cd1-2b53e8002946 started at 2021-05-04 16:22:49 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container main ready: false, restart count 0
May 4 16:27:55.654: INFO: pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84 started at 2021-05-04 16:26:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container test ready: false, restart count 0
May 4 16:27:55.654: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:27:55.654: INFO: Init container install-cni ready: true, restart count 2
May 4 16:27:55.654: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:27:55.654: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:27:55.654: INFO: var-expansion-792ff743-a8c5-4f3a-94b8-4968bd4cf720 started at 2021-05-04 16:25:54 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container dapi-container ready: false, restart count 0
May 4 16:27:55.654: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:27:55.654: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:27:55.654: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:27:55.654: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:55.654: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:27:55.654: INFO: Container node-exporter ready: true, restart count 0
May 4 16:27:55.654: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:27:55.654: INFO: Container tas-controller ready: true, restart count 0
May 4 16:27:55.654: INFO: Container tas-extender ready: true, restart count 0
May 4 16:27:55.654: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:27:55.654: INFO: Container c ready: false, restart count 0
W0504 16:27:55.666186 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:27:55.765: INFO: Latency metrics for node node2
May 4 16:27:55.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9763" for this suite.
• Failure [121.482 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:27:54.342: while waiting for pod to be running
  Unexpected error:
      <*errors.errorString | 0xc0002801f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:331
------------------------------
{"msg":"FAILED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":-1,"completed":21,"skipped":628,"failed":2,"failures":["[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]"]}
SSSSSSS
------------------------------
May 4 16:27:55.792: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:51.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-6e3862fd-8d71-4f3f-8deb-745cfd40ed13
STEP: Creating a pod to test consume secrets
May 4 16:27:51.302: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c" in namespace "projected-7309" to be "Succeeded or Failed"
May 4 16:27:51.304: INFO: Pod "pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.738014ms
May 4 16:27:53.309: INFO: Pod "pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007395316s
May 4 16:27:55.313: INFO: Pod "pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011378402s
May 4 16:27:57.318: INFO: Pod "pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016249827s
May 4 16:27:59.323: INFO: Pod "pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020937188s
STEP: Saw pod success
May 4 16:27:59.323: INFO: Pod "pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c" satisfied condition "Succeeded or Failed"
May 4 16:27:59.325: INFO: Trying to get logs from node node1 pod pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c container projected-secret-volume-test:
STEP: delete the pod
May 4 16:27:59.342: INFO: Waiting for pod pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c to disappear
May 4 16:27:59.344: INFO: Pod pod-projected-secrets-b8a5b5e3-4702-4eb0-b5de-e380759f490c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:27:59.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7309" for this suite.
• [SLOW TEST:8.088 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:49.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6134.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6134.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 4 16:28:01.810: INFO: DNS probes using dns-6134/dns-test-06270070-c356-4d7e-830e-8ea00f5fb735 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:28:01.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6134" for this suite.
• [SLOW TEST:12.079 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":22,"skipped":459,"failed":3,"failures":["[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]"]}
May 4 16:28:01.826: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:48.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0504 16:27:58.442755 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:29:00.461: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
May 4 16:29:00.461: INFO: Deleting pod "simpletest-rc-to-be-deleted-bv44b" in namespace "gc-8390"
May 4 16:29:00.468: INFO: Deleting pod "simpletest-rc-to-be-deleted-bxkqd" in namespace "gc-8390"
May 4 16:29:00.474: INFO: Deleting pod "simpletest-rc-to-be-deleted-fqp76" in namespace "gc-8390"
May 4 16:29:00.480: INFO: Deleting pod "simpletest-rc-to-be-deleted-ktdvd" in namespace "gc-8390"
May 4 16:29:00.486: INFO: Deleting pod "simpletest-rc-to-be-deleted-l674c" in namespace "gc-8390"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:29:00.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8390" for this suite.
• [SLOW TEST:72.138 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":28,"skipped":603,"failed":5,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","[k8s.io] Pods should be updated [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}
May 4 16:29:00.502: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:27:03.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
May 4 16:29:04.105: INFO: Successfully updated pod "var-expansion-98b73d79-7107-4138-a06e-af820041f2eb"
STEP: waiting for pod running
May 4 16:31:04.113: FAIL: while waiting for pod to be running
Unexpected error:
    <*errors.errorString | 0xc000342200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/common.glob..func9.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:270 +0x5d1
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002947080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002947080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002947080, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "var-expansion-5782".
STEP: Found 9 events.
May 4 16:31:04.119: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for var-expansion-98b73d79-7107-4138-a06e-af820041f2eb: { } Scheduled: Successfully assigned var-expansion-5782/var-expansion-98b73d79-7107-4138-a06e-af820041f2eb to node1
May 4 16:31:04.119: INFO: At 2021-05-04 16:27:05 +0000 UTC - event for var-expansion-98b73d79-7107-4138-a06e-af820041f2eb: {multus } AddedInterface: Add eth0 [10.244.4.212/24]
May 4 16:31:04.119: INFO: At 2021-05-04 16:27:05 +0000 UTC - event for var-expansion-98b73d79-7107-4138-a06e-af820041f2eb: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:31:04.119: INFO: At 2021-05-04 16:27:06 +0000 UTC - event for var-expansion-98b73d79-7107-4138-a06e-af820041f2eb: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:31:04.119: INFO: At 2021-05-04 16:27:06 +0000 UTC - event for var-expansion-98b73d79-7107-4138-a06e-af820041f2eb: {kubelet node1} Failed: Error: ErrImagePull
May 4 16:31:04.119: INFO: At 2021-05-04 16:27:07 +0000 UTC - event for var-expansion-98b73d79-7107-4138-a06e-af820041f2eb: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:31:04.119: INFO: At 2021-05-04 16:27:08 +0000 UTC - event for var-expansion-98b73d79-7107-4138-a06e-af820041f2eb: {multus } AddedInterface: Add eth0 [10.244.4.213/24] May 4 16:31:04.119: INFO: At 2021-05-04 16:27:08 +0000 UTC - event for var-expansion-98b73d79-7107-4138-a06e-af820041f2eb: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:31:04.119: INFO: At 2021-05-04 16:27:08 +0000 UTC - event for var-expansion-98b73d79-7107-4138-a06e-af820041f2eb: {kubelet node1} Failed: Error: ImagePullBackOff May 4 16:31:04.121: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:31:04.121: INFO: var-expansion-98b73d79-7107-4138-a06e-af820041f2eb node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:27:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:27:03 +0000 UTC ContainersNotReady containers with unready status: [dapi-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:27:03 +0000 UTC ContainersNotReady containers with unready status: [dapi-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:27:03 +0000 UTC }] May 4 16:31:04.121: INFO: May 4 16:31:04.125: INFO: Logging node info for node master1 May 4 16:31:04.127: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 44937 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] 
[] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:01 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:01 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:01 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:01 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:31:04.127: INFO: Logging kubelet events for node master1
May 4 16:31:04.129: INFO: Logging pods the kubelet thinks is on node master1
May 4 16:31:04.144: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:31:04.144: INFO: Init container
install-cni ready: true, restart count 0
May 4 16:31:04.144: INFO: Container kube-flannel ready: true, restart count 3
May 4 16:31:04.144: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.144: INFO: Container kube-multus ready: true, restart count 1
May 4 16:31:04.144: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.144: INFO: Container coredns ready: true, restart count 1
May 4 16:31:04.144: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.144: INFO: Container nfd-controller ready: true, restart count 0
May 4 16:31:04.144: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.144: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:31:04.144: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.144: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:31:04.144: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.144: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:31:04.144: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:04.144: INFO: Container docker-registry ready: true, restart count 0
May 4 16:31:04.144: INFO: Container nginx ready: true, restart count 0
May 4 16:31:04.144: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:04.144: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:31:04.144: INFO: Container node-exporter ready: true, restart count 0
May 4 16:31:04.144: INFO:
kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.144: INFO: Container kube-scheduler ready: true, restart count 0
W0504 16:31:04.158239 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:31:04.191: INFO: Latency metrics for node master1
May 4 16:31:04.191: INFO: Logging node info for node master2
May 4 16:31:04.193: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 44936 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:01 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:31:01 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:01 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:01 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:31:04.194: INFO: Logging kubelet events for node master2
May 4 16:31:04.196: INFO: Logging pods the kubelet thinks is on node master2
May 4 16:31:04.211: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:04.212: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:31:04.212: INFO: Container node-exporter ready: true, restart count 0
May 4 16:31:04.212: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.212: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:31:04.212: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.212: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:31:04.212: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.212: INFO: Container kube-scheduler ready: true, restart count 2
May 4 16:31:04.212: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.212: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:31:04.212: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:31:04.212: INFO: Init container install-cni ready: true, restart count 0
May 4 16:31:04.212: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:31:04.212: INFO:
kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.212: INFO: Container kube-multus ready: true, restart count 1
May 4 16:31:04.212: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.212: INFO: Container autoscaler ready: true, restart count 1
W0504 16:31:04.223214 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:31:04.248: INFO: Latency metrics for node master2
May 4 16:31:04.248: INFO: Logging node info for node master3
May 4 16:31:04.251: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 44934 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:00 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:00 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:00 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:31:04.251: INFO: Logging kubelet events for node master3
May 4 16:31:04.254: INFO: Logging pods the kubelet thinks is on node master3
May 4 16:31:04.268: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:31:04.268: INFO: Init container install-cni ready: true, restart count 0
May 4 16:31:04.268: INFO: Container kube-flannel ready: true, restart count 1
May 4 16:31:04.268: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.268: INFO: Container kube-multus ready: true, restart count 1
May 4 16:31:04.268: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.268: INFO: Container coredns ready: true, restart count 1
May 4 16:31:04.268: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:04.268: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:31:04.268: INFO: Container node-exporter ready: true, restart count 0
May 4 16:31:04.268: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.268: INFO: Container kube-apiserver ready: true, restart count 0
May 4 16:31:04.268: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:04.268: INFO: Container kube-controller-manager ready: true, restart count 2
May 4 16:31:04.268: INFO:
kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.268: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:31:04.269: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.269: INFO: Container kube-proxy ready: true, restart count 2 W0504 16:31:04.281926 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:31:04.306: INFO: Latency metrics for node master3 May 4 16:31:04.306: INFO: Logging node info for node node1 May 4 16:31:04.309: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 44921 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:30:57 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:30:57 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:30:57 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:30:57 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:04.310: INFO: Logging kubelet events for node node1 May 4 16:31:04.312: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:31:04.333: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: INFO: Container kube-multus ready: true, restart count 1 May 4 16:31:04.333: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:31:04.333: INFO: Container nodereport ready: true, restart count 0 May 4 16:31:04.333: INFO: Container reconcile ready: true, restart count 0 May 4 16:31:04.333: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:31:04.333: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:31:04.333: INFO: Container grafana ready: true, restart count 0 May 4 16:31:04.333: INFO: Container prometheus ready: true, restart count 1 May 4 16:31:04.333: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:31:04.333: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:31:04.333: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:31:04.333: INFO: node-feature-discovery-worker-wfgl5 started at 
2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:31:04.333: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:31:04.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:04.333: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:31:04.333: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:31:04.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:04.333: INFO: Container node-exporter ready: true, restart count 0 May 4 16:31:04.333: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:04.333: INFO: Init container install-cni ready: true, restart count 2 May 4 16:31:04.333: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:31:04.333: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: INFO: Container c ready: false, restart count 0 May 4 16:31:04.333: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:31:04.333: INFO: Container collectd ready: true, restart count 0 May 4 16:31:04.333: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:31:04.333: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:31:04.333: INFO: busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 started at 2021-05-04 16:26:47 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: INFO: Container busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 ready: false, restart count 0 May 4 16:31:04.333: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: 
INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:31:04.333: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:31:04.333: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: INFO: Container liveness-http ready: false, restart count 21 May 4 16:31:04.333: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:31:04.333: INFO: var-expansion-98b73d79-7107-4138-a06e-af820041f2eb started at 2021-05-04 16:27:03 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.333: INFO: Container dapi-container ready: false, restart count 0 May 4 16:31:04.333: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:31:04.333: INFO: Container discover ready: false, restart count 0 May 4 16:31:04.333: INFO: Container init ready: false, restart count 0 May 4 16:31:04.333: INFO: Container install ready: false, restart count 0 W0504 16:31:04.345166 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:31:04.393: INFO: Latency metrics for node node1 May 4 16:31:04.393: INFO: Logging node info for node node2 May 4 16:31:04.396: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 44914 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:30:54 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:30:54 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:30:54 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:30:54 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:04.397: INFO: Logging kubelet events for node node2 May 4 16:31:04.398: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:31:04.419: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:31:04.419: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:04.419: INFO: Init container install-cni ready: true, restart count 2 May 4 16:31:04.419: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:31:04.419: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:31:04.419: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:31:04.419: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:04.419: INFO: Container 
node-exporter ready: true, restart count 0 May 4 16:31:04.419: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:31:04.419: INFO: Container tas-controller ready: true, restart count 0 May 4 16:31:04.419: INFO: Container tas-extender ready: true, restart count 0 May 4 16:31:04.419: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container c ready: false, restart count 0 May 4 16:31:04.419: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:31:04.419: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:31:04.419: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container kube-multus ready: true, restart count 1 May 4 16:31:04.419: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:31:04.419: INFO: Container collectd ready: true, restart count 0 May 4 16:31:04.419: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:31:04.419: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:31:04.419: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container c ready: false, restart count 0 May 4 16:31:04.419: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:31:04.419: INFO: Container discover ready: false, restart count 0 May 4 16:31:04.419: INFO: Container init ready: false, restart count 0 May 4 16:31:04.419: INFO: Container install ready: false, 
restart count 0 May 4 16:31:04.419: INFO: downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 started at 2021-05-04 16:27:45 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container dapi-container ready: false, restart count 0 May 4 16:31:04.419: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container c ready: false, restart count 0 May 4 16:31:04.419: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:31:04.419: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:31:04.419: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:31:04.419: INFO: Container nodereport ready: true, restart count 0 May 4 16:31:04.419: INFO: Container reconcile ready: true, restart count 0 May 4 16:31:04.419: INFO: pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84 started at 2021-05-04 16:26:40 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container test ready: false, restart count 0 May 4 16:31:04.419: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:31:04.419: INFO: Container kubernetes-dashboard ready: true, restart count 1 W0504 16:31:04.433211 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:31:04.463: INFO: Latency metrics for node node2 May 4 16:31:04.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5782" for this suite. 
• Failure [240.916 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:31:04.113: while waiting for pod to be running Unexpected error: <*errors.errorString | 0xc000342200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:270 ------------------------------ {"msg":"FAILED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":-1,"completed":17,"skipped":489,"failed":5,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","[k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]"]} May 4 16:31:04.477: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-auth] ServiceAccounts 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:26:40.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 4 16:31:40.738: FAIL: Unexpected error: <*errors.errorString | 0xc0002821f0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func6.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:230 +0x755 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0015fcd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc0015fcd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc0015fcd80, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "svcaccounts-1079". STEP: Found 9 events. 
May 4 16:31:40.744: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84: { } Scheduled: Successfully assigned svcaccounts-1079/pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84 to node2 May 4 16:31:40.744: INFO: At 2021-05-04 16:26:42 +0000 UTC - event for pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84: {multus } AddedInterface: Add eth0 [10.244.3.11/24] May 4 16:31:40.744: INFO: At 2021-05-04 16:26:42 +0000 UTC - event for pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:31:40.744: INFO: At 2021-05-04 16:26:43 +0000 UTC - event for pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:31:40.744: INFO: At 2021-05-04 16:26:43 +0000 UTC - event for pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84: {kubelet node2} Failed: Error: ErrImagePull May 4 16:31:40.744: INFO: At 2021-05-04 16:26:44 +0000 UTC - event for pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
May 4 16:31:40.744: INFO: At 2021-05-04 16:26:45 +0000 UTC - event for pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84: {multus } AddedInterface: Add eth0 [10.244.3.12/24] May 4 16:31:40.744: INFO: At 2021-05-04 16:26:45 +0000 UTC - event for pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:31:40.744: INFO: At 2021-05-04 16:26:45 +0000 UTC - event for pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:31:40.746: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:31:40.746: INFO: pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84 node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:26:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:26:40 +0000 UTC ContainersNotReady containers with unready status: [test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:26:40 +0000 UTC ContainersNotReady containers with unready status: [test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:26:40 +0000 UTC }] May 4 16:31:40.746: INFO: May 4 16:31:40.750: INFO: Logging node info for node master1 May 4 16:31:40.752: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 45072 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] 
[] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:40.753: INFO: Logging kubelet events for node master1 May 4 16:31:40.755: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:31:40.764: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.764: INFO: Container 
kube-multus ready: true, restart count 1 May 4 16:31:40.764: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.764: INFO: Container coredns ready: true, restart count 1 May 4 16:31:40.764: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.764: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:31:40.764: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:40.764: INFO: Init container install-cni ready: true, restart count 0 May 4 16:31:40.764: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:31:40.764: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.764: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:31:40.764: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:31:40.764: INFO: Container docker-registry ready: true, restart count 0 May 4 16:31:40.764: INFO: Container nginx ready: true, restart count 0 May 4 16:31:40.764: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:31:40.764: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:40.765: INFO: Container node-exporter ready: true, restart count 0 May 4 16:31:40.765: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.765: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:31:40.765: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.765: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:31:40.765: INFO: kube-controller-manager-master1 
started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.765: INFO: Container kube-controller-manager ready: true, restart count 2 W0504 16:31:40.776972 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:31:40.800: INFO: Latency metrics for node master1 May 4 16:31:40.800: INFO: Logging node info for node master2 May 4 16:31:40.803: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 45071 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:40.804: INFO: Logging kubelet events for node master2 May 4 16:31:40.805: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:31:40.813: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.813: INFO: Container kube-multus ready: true, restart count 1 May 4 16:31:40.813: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.813: INFO: Container autoscaler ready: true, restart count 1 May 4 16:31:40.813: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:31:40.813: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:40.813: INFO: Container node-exporter ready: true, restart count 0 May 4 16:31:40.813: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.813: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:31:40.813: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.813: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:31:40.813: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.813: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:31:40.813: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 
container statuses recorded) May 4 16:31:40.813: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:31:40.813: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:40.813: INFO: Init container install-cni ready: true, restart count 0 May 4 16:31:40.813: INFO: Container kube-flannel ready: true, restart count 1 W0504 16:31:40.828222 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:31:40.852: INFO: Latency metrics for node master2 May 4 16:31:40.852: INFO: Logging node info for node master3 May 4 16:31:40.854: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 45070 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:31 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:31 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:40.855: INFO: Logging kubelet events for node master3 May 4 16:31:40.858: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:31:40.865: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:31:40.865: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:40.865: INFO: Container node-exporter ready: true, restart count 0 May 4 16:31:40.865: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.866: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:31:40.866: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.866: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:31:40.866: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.866: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:31:40.866: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.866: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:31:40.866: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:40.866: INFO: Init container install-cni ready: true, restart count 0 May 4 16:31:40.866: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:31:40.866: INFO: 
kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.866: INFO: Container kube-multus ready: true, restart count 1 May 4 16:31:40.866: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:31:40.866: INFO: Container coredns ready: true, restart count 1 W0504 16:31:40.880186 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:31:40.911: INFO: Latency metrics for node master3 May 4 16:31:40.911: INFO: Logging node info for node node1 May 4 16:31:40.914: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 45091 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true 
feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:37 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:37 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:37 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:37 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:31:40.914: INFO: Logging kubelet events for node node1
May 4 16:31:40.916: INFO: Logging pods the kubelet thinks is on node node1
May 4 16:31:40.931: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:31:40.931: INFO: Init container install-cni ready: true, restart count 2
May 4 16:31:40.931: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:31:40.931: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:40.931: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:31:40.931: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:40.931: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:31:40.931: INFO: Container prometheus-operator ready: true, restart count 0
May 4 16:31:40.931: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:40.931: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:31:40.931: INFO: Container node-exporter ready: true, restart count 0
May 4 16:31:40.931: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:31:40.931: INFO: Container collectd ready: true, restart count 0
May 4 16:31:40.931: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:31:40.931: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:31:40.931: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:40.931: INFO: Container c ready: false, restart count 0
May 4 16:31:40.931: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:40.931: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:31:40.931: INFO: busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 started at 2021-05-04 16:26:47 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:40.931: INFO: Container busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 ready: false, restart count 0
May 4 16:31:40.931: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:40.931: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:31:40.931: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:40.931: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:31:40.931: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:40.931: INFO: Container liveness-http ready: false, restart count 21
May 4 16:31:40.931: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded)
May 4 16:31:40.931: INFO: Container discover ready: false, restart count 0
May 4 16:31:40.931: INFO: Container init ready: false, restart count 0
May 4 16:31:40.931: INFO: Container install ready: false, restart count 0
May 4 16:31:40.931: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:40.931: INFO: Container kube-multus ready: true, restart count 1
May 4 16:31:40.931: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:40.931: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 16:31:40.931: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:40.931: INFO: Container nodereport ready: true, restart count 0
May 4 16:31:40.931: INFO: Container reconcile ready: true, restart count 0
May 4 16:31:40.931: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded)
May 4 16:31:40.931: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 4 16:31:40.931: INFO: Container grafana ready: true, restart count 0
May 4 16:31:40.931: INFO: Container prometheus ready: true, restart count 1
May 4 16:31:40.931: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 4 16:31:40.931: INFO: Container rules-configmap-reloader ready: true, restart count 0
W0504 16:31:40.945033 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:31:40.997: INFO: Latency metrics for node node1 May 4 16:31:40.997: INFO: Logging node info for node node2 May 4 16:31:41.000: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 45083 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:34 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:34 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:34 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:34 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:31:41.001: INFO: Logging kubelet events for node node2
May 4 16:31:41.003: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:31:41.016: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:31:41.016: INFO: Init container install-cni ready: true, restart count 2
May 4 16:31:41.016: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:31:41.016: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:31:41.016: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:31:41.016: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:31:41.016: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:41.016: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:31:41.016: INFO: Container node-exporter ready: true, restart count 0
May 4 16:31:41.016: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:41.016: INFO: Container tas-controller ready: true, restart count 0
May 4 16:31:41.016: INFO: Container tas-extender ready: true, restart count 0
May 4 16:31:41.016: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container c ready: false, restart count 0
May 4 16:31:41.016: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:31:41.016: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container kube-multus ready: true, restart count 1
May 4 16:31:41.016: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:31:41.016: INFO: Container discover ready: false, restart count 0
May 4 16:31:41.016: INFO: Container init ready: false, restart count 0
May 4 16:31:41.016: INFO: Container install ready: false, restart count 0
May 4 16:31:41.016: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:31:41.016: INFO: Container collectd ready: true, restart count 0
May 4 16:31:41.016: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:31:41.016: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:31:41.016: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container c ready: false, restart count 0
May 4 16:31:41.016: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container c ready: false, restart count 0
May 4 16:31:41.016: INFO: downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 started at 2021-05-04 16:27:45 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container dapi-container ready: false, restart count 0
May 4 16:31:41.016: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:31:41.016: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:31:41.016: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:31:41.016: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:31:41.016: INFO: Container nodereport ready: true, restart count 0
May 4 16:31:41.016: INFO: Container reconcile ready: true, restart count 0
May 4 16:31:41.016: INFO: pod-service-account-55278b67-f302-4d69-b992-0113a6bbdd84 started at 2021-05-04 16:26:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:31:41.016: INFO: Container test ready: false, restart count 0
W0504 16:31:41.031230 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:31:41.075: INFO: Latency metrics for node node2
May 4 16:31:41.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1079" for this suite.
• Failure [300.893 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:31:40.738: Unexpected error:
      <*errors.errorString | 0xc0002821f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:230
------------------------------
{"msg":"FAILED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":32,"skipped":464,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]"]}
May 4 16:31:41.088: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:26:47.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:26:47.596: INFO: Waiting up to 5m0s for pod "busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0" in namespace "security-context-test-551" to be "Succeeded or Failed"
May 4 16:26:47.601: INFO: Pod "busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.523725ms
May 4 16:26:49.604: INFO: Pod "busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007876717s
[... identical "Pending" polls every ~2s from 16:26:51 through 16:31:44 elided ...]
May 4 16:31:46.155: INFO: Pod "busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.558961555s
May 4 16:31:48.157: FAIL: wait for pod "busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0" to succeed
Expected success, but got an error:
    <*errors.errorString | 0xc003c051b0>: {
        s: "Gave up after waiting 5m0s for pod \"busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0\" to be \"Succeeded or Failed\"",
    }
    Gave up after waiting 5m0s for pod "busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0" to be "Succeeded or Failed"

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).WaitForSuccess(0xc0047942e0, 0xc0039af340, 0x37, 0x45d964b800)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:212 +0x2bb
k8s.io/kubernetes/test/e2e/common.glob..func29.2.2(0xfffe)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:74 +0x29a
k8s.io/kubernetes/test/e2e/common.glob..func29.2.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:84 +0x2e
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703c80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc000703c80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc000703c80, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "security-context-test-551".
STEP: Found 7 events.
May 4 16:31:48.163: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0: { } Scheduled: Successfully assigned security-context-test-551/busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 to node1
May 4 16:31:48.163: INFO: At 2021-05-04 16:26:49 +0000 UTC - event for busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0: {multus } AddedInterface: Add eth0 [10.244.4.211/24]
May 4 16:31:48.163: INFO: At 2021-05-04 16:26:49 +0000 UTC - event for busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:31:48.163: INFO: At 2021-05-04 16:26:50 +0000 UTC - event for busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:31:48.163: INFO: At 2021-05-04 16:26:50 +0000 UTC - event for busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0: {kubelet node1} Failed: Error: ErrImagePull
May 4 16:31:48.163: INFO: At 2021-05-04 16:26:50 +0000 UTC - event for busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:31:48.163: INFO: At 2021-05-04 16:26:50 +0000 UTC - event for busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0: {kubelet node1} Failed: Error: ImagePullBackOff
May 4 16:31:48.165: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:31:48.166: INFO: busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:26:47 +0000 UTC ContainersNotReady containers with unready status:
[busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:26:47 +0000 UTC ContainersNotReady containers with unready status: [busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:26:47 +0000 UTC }] May 4 16:31:48.166: INFO: May 4 16:31:48.171: INFO: Logging node info for node master1 May 4 16:31:48.173: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 45108 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:48.175: INFO: Logging kubelet events for node master1 May 4 16:31:48.177: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:31:48.188: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.188: INFO: Container 
kube-apiserver ready: true, restart count 0 May 4 16:31:48.188: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.188: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:31:48.188: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.188: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:31:48.188: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:31:48.188: INFO: Container docker-registry ready: true, restart count 0 May 4 16:31:48.188: INFO: Container nginx ready: true, restart count 0 May 4 16:31:48.188: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:31:48.188: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:48.188: INFO: Container node-exporter ready: true, restart count 0 May 4 16:31:48.188: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.188: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:31:48.188: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:48.188: INFO: Init container install-cni ready: true, restart count 0 May 4 16:31:48.188: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:31:48.188: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.188: INFO: Container kube-multus ready: true, restart count 1 May 4 16:31:48.188: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.188: INFO: Container coredns ready: true, restart count 1 May 4 16:31:48.188: INFO: 
node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.188: INFO: Container nfd-controller ready: true, restart count 0 W0504 16:31:48.204083 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:31:48.231: INFO: Latency metrics for node master1 May 4 16:31:48.231: INFO: Logging node info for node master2 May 4 16:31:48.234: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 45107 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:48.234: INFO: Logging kubelet events for node master2 May 4 16:31:48.237: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:31:48.246: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.246: INFO: Container kube-multus ready: true, restart count 1 May 4 16:31:48.246: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.246: INFO: Container autoscaler ready: true, restart count 1 May 4 16:31:48.246: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:31:48.246: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:48.246: INFO: Container node-exporter ready: true, restart count 0 May 4 16:31:48.246: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.246: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:31:48.246: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.246: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:31:48.246: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.246: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:31:48.246: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 
container statuses recorded) May 4 16:31:48.246: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:31:48.246: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:48.246: INFO: Init container install-cni ready: true, restart count 0 May 4 16:31:48.246: INFO: Container kube-flannel ready: true, restart count 1 W0504 16:31:48.260340 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:31:48.291: INFO: Latency metrics for node master2 May 4 16:31:48.291: INFO: Logging node info for node master3 May 4 16:31:48.293: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 45106 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:41 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:41 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:48.294: INFO: Logging kubelet events for node master3 May 4 16:31:48.296: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:31:48.304: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.304: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:31:48.304: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.304: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:31:48.304: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.304: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:31:48.304: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:48.304: INFO: Init container install-cni ready: true, restart count 0 May 4 16:31:48.304: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:31:48.304: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.304: INFO: Container kube-multus ready: true, restart count 1 May 4 16:31:48.304: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.304: INFO: Container coredns ready: true, restart count 1 May 4 16:31:48.304: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses 
recorded) May 4 16:31:48.304: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:48.304: INFO: Container node-exporter ready: true, restart count 0 May 4 16:31:48.304: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.304: INFO: Container kube-apiserver ready: true, restart count 0 W0504 16:31:48.320030 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:31:48.351: INFO: Latency metrics for node master3 May 4 16:31:48.351: INFO: Logging node info for node node1 May 4 16:31:48.353: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 45146 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:47 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:47 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:47 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:47 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:48.354: INFO: Logging kubelet events for node node1 May 4 16:31:48.356: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:31:48.371: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.371: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:31:48.371: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:31:48.371: INFO: Container nodereport ready: true, restart count 0 May 4 16:31:48.371: INFO: Container reconcile ready: true, restart count 0 May 4 16:31:48.371: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:31:48.371: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:31:48.371: INFO: Container grafana ready: true, restart count 0 May 4 16:31:48.371: INFO: Container prometheus ready: true, restart count 1 May 4 16:31:48.371: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:31:48.371: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:31:48.371: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:48.371: INFO: Init container install-cni ready: true, restart count 2 May 4 16:31:48.371: INFO: Container kube-flannel ready: true, restart count 2 
May 4 16:31:48.371: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.371: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:31:48.371: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:31:48.371: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:48.371: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:31:48.371: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:31:48.371: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:48.371: INFO: Container node-exporter ready: true, restart count 0 May 4 16:31:48.371: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:31:48.371: INFO: Container collectd ready: true, restart count 0 May 4 16:31:48.371: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:31:48.371: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:31:48.371: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.371: INFO: Container c ready: false, restart count 0 May 4 16:31:48.371: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.371: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:31:48.371: INFO: busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 started at 2021-05-04 16:26:47 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.371: INFO: Container busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0 ready: false, restart count 0 May 4 16:31:48.371: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.371: INFO: 
Container nginx-proxy ready: true, restart count 2 May 4 16:31:48.371: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.371: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:31:48.371: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.371: INFO: Container liveness-http ready: false, restart count 21 May 4 16:31:48.371: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:31:48.371: INFO: Container discover ready: false, restart count 0 May 4 16:31:48.371: INFO: Container init ready: false, restart count 0 May 4 16:31:48.371: INFO: Container install ready: false, restart count 0 May 4 16:31:48.371: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.371: INFO: Container kube-multus ready: true, restart count 1 W0504 16:31:48.385645 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 4 16:31:48.436: INFO: Latency metrics for node node1 May 4 16:31:48.436: INFO: Logging node info for node node2 May 4 16:31:48.439: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 45120 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{}
,"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:44 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:44 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:31:44 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:31:44 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:31:48.440: INFO: Logging kubelet events for node node2 May 4 16:31:48.443: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:31:48.457: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.457: INFO: Container kube-multus ready: true, restart count 1 May 4 16:31:48.457: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:31:48.457: INFO: Container discover ready: false, restart count 0 May 4 16:31:48.457: INFO: Container init ready: false, restart count 0 May 4 16:31:48.457: INFO: Container install ready: false, restart count 0 May 4 16:31:48.457: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:31:48.457: INFO: Container collectd ready: true, restart count 0 May 4 16:31:48.457: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:31:48.457: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:31:48.457: INFO: 
fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.457: INFO: Container c ready: false, restart count 0 May 4 16:31:48.457: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.457: INFO: Container c ready: false, restart count 0 May 4 16:31:48.457: INFO: downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 started at 2021-05-04 16:27:45 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.457: INFO: Container dapi-container ready: false, restart count 0 May 4 16:31:48.458: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.458: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:31:48.458: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.458: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:31:48.458: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.458: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:31:48.458: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:31:48.458: INFO: Container nodereport ready: true, restart count 0 May 4 16:31:48.458: INFO: Container reconcile ready: true, restart count 0 May 4 16:31:48.458: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:31:48.458: INFO: Init container install-cni ready: true, restart count 2 May 4 16:31:48.458: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:31:48.458: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.458: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:31:48.458: 
INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.458: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:31:48.458: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.458: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:31:48.458: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:31:48.458: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:31:48.458: INFO: Container node-exporter ready: true, restart count 0 May 4 16:31:48.458: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:31:48.458: INFO: Container tas-controller ready: true, restart count 0 May 4 16:31:48.458: INFO: Container tas-extender ready: true, restart count 0 May 4 16:31:48.458: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.458: INFO: Container c ready: false, restart count 0 May 4 16:31:48.458: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:31:48.458: INFO: Container liveness-exec ready: false, restart count 6 W0504 16:31:48.469858 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:31:48.499: INFO: Latency metrics for node node2 May 4 16:31:48.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-551" for this suite. 
• Failure [300.948 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:31:48.157: wait for pod "busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0" to succeed Expected success, but got an error: <*errors.errorString | 0xc003c051b0>: { s: "Gave up after waiting 5m0s for pod \"busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0\" to be \"Succeeded or Failed\"", } Gave up after waiting 5m0s for pod "busybox-user-65534-68ff21a5-e8a6-436a-b96d-8961d5f7a8a0" to be "Succeeded or Failed" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:212 ------------------------------ {"msg":"FAILED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":799,"failed":3,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]"]} May 4 16:31:48.516: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Downward API 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:27:45.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 4 16:27:45.067: INFO: Waiting up to 5m0s for pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813" in namespace "downward-api-5475" to be "Succeeded or Failed" May 4 16:27:45.069: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307707ms May 4 16:27:47.073: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00632184s May 4 16:27:49.076: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009120486s May 4 16:27:51.080: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013475096s May 4 16:27:53.083: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016211342s May 4 16:27:55.086: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018973092s May 4 16:27:57.089: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 12.021637947s May 4 16:27:59.091: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.02426817s May 4 16:28:01.094: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 16.027551503s May 4 16:28:03.097: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 18.029869335s May 4 16:28:05.101: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 20.03429745s May 4 16:28:07.104: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 22.037009702s May 4 16:28:09.108: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 24.041070307s May 4 16:28:11.111: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 26.043986212s May 4 16:28:13.114: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 28.046690633s May 4 16:28:15.116: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 30.049420999s May 4 16:28:17.120: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 32.053121036s May 4 16:28:19.124: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 34.056632328s May 4 16:28:21.127: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 36.059733081s May 4 16:28:23.130: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 38.063419244s May 4 16:28:25.135: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
Elapsed: 40.068168692s May 4 16:28:27.138: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 42.07101433s May 4 16:28:29.141: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 44.073823668s May 4 16:28:31.144: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 46.077048834s May 4 16:28:33.147: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 48.080034829s May 4 16:28:35.151: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 50.083579325s May 4 16:28:37.154: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 52.086826079s May 4 16:28:39.157: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 54.090514527s May 4 16:28:41.160: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 56.09350108s May 4 16:28:43.163: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 58.096305049s May 4 16:28:45.168: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.101474008s May 4 16:28:47.172: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.105389334s May 4 16:28:49.176: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.10933221s May 4 16:28:51.179: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m6.112150855s May 4 16:28:53.184: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.11673172s May 4 16:28:55.186: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.11951969s May 4 16:28:57.189: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.122412124s May 4 16:28:59.193: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.125628785s May 4 16:29:01.196: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.129522216s May 4 16:29:03.199: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.132230411s May 4 16:29:05.203: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.136001442s May 4 16:29:07.206: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.138715705s May 4 16:29:09.210: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.142556731s May 4 16:29:11.212: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.14550473s May 4 16:29:13.216: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.149162764s May 4 16:29:15.219: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.151857029s May 4 16:29:17.222: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m32.155489357s May 4 16:29:19.226: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.158732915s May 4 16:29:21.229: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.16220996s May 4 16:29:23.234: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.166566849s May 4 16:29:25.237: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.17037562s May 4 16:29:27.241: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.174413608s May 4 16:29:29.246: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.17925575s May 4 16:29:31.249: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.182432969s May 4 16:29:33.252: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.185466387s May 4 16:29:35.256: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.188570058s May 4 16:29:37.259: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.192289647s May 4 16:29:39.262: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.195336189s May 4 16:29:41.265: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.19827377s May 4 16:29:43.270: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m58.202753933s May 4 16:29:45.274: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.207220352s May 4 16:29:47.277: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.209885177s May 4 16:29:49.280: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.213451982s May 4 16:29:51.284: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.216722329s May 4 16:29:53.287: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.220551138s May 4 16:29:55.290: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.223436689s May 4 16:29:57.293: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.226341026s May 4 16:29:59.296: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.229316176s May 4 16:30:01.300: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.232998032s May 4 16:30:03.305: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.238030167s May 4 16:30:05.309: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.242233176s May 4 16:30:07.314: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.246748705s May 4 16:30:09.317: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m24.2505164s May 4 16:30:11.321: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.254329421s May 4 16:30:13.326: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.258675694s May 4 16:30:15.329: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.262276267s May 4 16:30:17.333: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.265675372s May 4 16:30:19.337: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.270333557s May 4 16:30:21.340: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.273411277s May 4 16:30:23.345: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.278477677s May 4 16:30:25.349: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.282208638s May 4 16:30:27.352: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.284959652s May 4 16:30:29.356: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.289319186s May 4 16:30:31.361: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.293596311s May 4 16:30:33.364: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.296863561s May 4 16:30:35.367: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m50.300532506s May 4 16:30:37.371: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.304282516s May 4 16:30:39.376: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.308918703s May 4 16:30:41.380: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.31267925s May 4 16:30:43.383: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.315622197s May 4 16:30:45.387: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.320314945s May 4 16:30:47.391: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.324481028s May 4 16:30:49.395: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.328233782s May 4 16:30:51.399: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.332060221s May 4 16:30:53.402: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.334959475s May 4 16:30:55.405: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.338487859s May 4 16:30:57.408: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.341515695s May 4 16:30:59.411: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.343976128s May 4 16:31:01.415: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m16.347780869s May 4 16:31:03.420: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
[... identical "Phase=Pending, readiness=false" status polled roughly every 2s from 3m18s through 4m56s elapsed; the pod never left Pending ...] 
May 4 16:32:43.634: INFO: Pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m58.566761549s May 4 16:32:45.643: INFO: Failed to get logs from node "node2" pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813" container "dapi-container": the server rejected our request for an unknown reason (get pods downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813) STEP: delete the pod May 4 16:32:45.649: INFO: Waiting for pod downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 to disappear May 4 16:32:45.651: INFO: Pod downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 still exists May 4 16:32:47.652: INFO: Waiting for pod downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 to disappear May 4 16:32:47.655: INFO: Pod downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 no longer exists May 4 16:32:47.656: FAIL: Unexpected error: <*errors.errorString | 0xc006735c30>: { s: "expected pod \"downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813\" success: Gave up after waiting 5m0s for pod \"downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813\" to be \"Succeeded or Failed\"", } expected pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813" success: Gave up after waiting 5m0s for pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813" to be "Succeeded or Failed" occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00140ab00, 0x4c29f00, 0x15, 0xc002a01c00, 0x0, 0xc0043e31a8, 0x3, 0x3, 0x4de7490) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutputRegexp(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:532 k8s.io/kubernetes/test/e2e/common.testDownwardAPIUsingPod(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:425 k8s.io/kubernetes/test/e2e/common.testDownwardAPI(0xc00140ab00, 0xc0041dd9c0, 0x31, 0xc000e0e300, 0x3, 0x3, 0xc0043e31a8, 0x3, 0x3) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:391 +0x75c k8s.io/kubernetes/test/e2e/common.glob..func5.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:80 +0x4e8 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001568300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001568300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001568300, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "downward-api-5475". STEP: Found 10 events. 
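The polling spam and the "Gave up after waiting 5m0s" failure above come from the e2e framework's wait loop, which checks the pod phase about every 2 seconds until it reaches "Succeeded or Failed" or the 5m timeout expires. A minimal, generic sketch of that poll-with-timeout pattern (the function and parameter names are illustrative, not the framework's actual Go API):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns a phase
    in `want`; give up with TimeoutError after `timeout` seconds, mirroring
    the framework's 5m0s deadline with ~2s polls seen in the log above."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in want:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(
                f'gave up after {timeout:.0f}s waiting for pod to be '
                f'"Succeeded or Failed" (last phase: {phase!r})')
        sleep(interval)
```

Injecting the clock and sleep functions keeps the sketch testable without real delays; in the failing run above, `get_phase` kept returning `"Pending"` (the image never pulled), so the deadline branch fired.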
May 4 16:32:47.661: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: { } Scheduled: Successfully assigned downward-api-5475/downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813 to node2 May 4 16:32:47.661: INFO: At 2021-05-04 16:27:46 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: {multus } AddedInterface: Add eth0 [10.244.3.14/24] May 4 16:32:47.661: INFO: At 2021-05-04 16:27:46 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:32:47.661: INFO: At 2021-05-04 16:27:47 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:32:47.661: INFO: At 2021-05-04 16:27:47 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: {kubelet node2} Failed: Error: ErrImagePull May 4 16:32:47.661: INFO: At 2021-05-04 16:27:48 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
May 4 16:32:47.661: INFO: At 2021-05-04 16:27:53 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: {multus } AddedInterface: Add eth0 [10.244.3.19/24] May 4 16:32:47.661: INFO: At 2021-05-04 16:27:53 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:32:47.661: INFO: At 2021-05-04 16:27:53 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:32:47.661: INFO: At 2021-05-04 16:27:56 +0000 UTC - event for downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813: {multus } AddedInterface: Add eth0 [10.244.3.20/24] May 4 16:32:47.663: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:32:47.664: INFO: May 4 16:32:47.668: INFO: Logging node info for node master1 May 4 16:32:47.671: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 45356 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:42 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:42 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:42 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:32:42 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:32:47.672: INFO: Logging kubelet events for node master1 May 4 16:32:47.674: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:32:47.684: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.684: INFO: Container 
kube-scheduler ready: true, restart count 0 May 4 16:32:47.684: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.684: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:32:47.684: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.684: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:32:47.684: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.684: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:32:47.684: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:32:47.684: INFO: Container docker-registry ready: true, restart count 0 May 4 16:32:47.684: INFO: Container nginx ready: true, restart count 0 May 4 16:32:47.684: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:32:47.684: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:32:47.684: INFO: Container node-exporter ready: true, restart count 0 May 4 16:32:47.684: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:32:47.684: INFO: Init container install-cni ready: true, restart count 0 May 4 16:32:47.684: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:32:47.684: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.684: INFO: Container kube-multus ready: true, restart count 1 May 4 16:32:47.684: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.684: INFO: Container coredns ready: true, restart count 1 May 4 16:32:47.684: INFO: 
node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.684: INFO: Container nfd-controller ready: true, restart count 0 W0504 16:32:47.698191 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:32:47.723: INFO: Latency metrics for node master1 May 4 16:32:47.723: INFO: Logging node info for node master2 May 4 16:32:47.726: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 45355 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:32:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:32:41 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:32:47.726: INFO: Logging kubelet events for node master2 May 4 16:32:47.729: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:32:47.737: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:32:47.737: INFO: Init container install-cni ready: true, restart count 0 May 4 16:32:47.737: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:32:47.737: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.737: INFO: Container kube-multus ready: true, restart count 1 May 4 16:32:47.737: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.737: INFO: Container autoscaler ready: true, restart count 1 May 4 16:32:47.737: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:32:47.737: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:32:47.737: INFO: Container node-exporter ready: true, restart count 0 May 4 16:32:47.737: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.737: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:32:47.737: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.737: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:32:47.737: 
INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.737: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:32:47.737: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.737: INFO: Container kube-proxy ready: true, restart count 2 W0504 16:32:47.752615 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:32:47.783: INFO: Latency metrics for node master2 May 4 16:32:47.783: INFO: Logging node info for node master3 May 4 16:32:47.785: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 45354 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:41 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:41 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:32:41 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:32:47.786: INFO: Logging kubelet events for node master3 May 4 16:32:47.788: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:32:47.798: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.798: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:32:47.798: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.798: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:32:47.798: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:32:47.798: INFO: Init container install-cni ready: true, restart count 0 May 4 16:32:47.798: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:32:47.798: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.798: INFO: Container kube-multus ready: true, restart count 1 May 4 16:32:47.798: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.798: INFO: Container coredns ready: true, restart count 1 May 4 16:32:47.798: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:32:47.798: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:32:47.798: INFO: Container node-exporter ready: true, restart count 0 May 4 16:32:47.798: INFO: kube-apiserver-master3 started 
at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.798: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:32:47.798: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.798: INFO: Container kube-controller-manager ready: true, restart count 2 W0504 16:32:47.813925 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:32:47.838: INFO: Latency metrics for node master3 May 4 16:32:47.838: INFO: Logging node info for node node1 May 4 16:32:47.841: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 45340 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:37 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:37 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:37 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:32:37 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:32:47.842: INFO: Logging kubelet events for node node1 May 4 16:32:47.845: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:32:47.859: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.859: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:32:47.859: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.859: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:32:47.859: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.859: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:32:47.859: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.859: INFO: Container liveness-http ready: false, restart count 21 May 4 16:32:47.859: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:32:47.859: INFO: Container discover ready: false, restart count 0 May 4 16:32:47.859: INFO: Container init ready: false, restart count 0 May 4 16:32:47.859: INFO: Container install ready: false, restart count 0 May 4 16:32:47.859: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.859: 
INFO: Container kube-multus ready: true, restart count 1 May 4 16:32:47.859: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.859: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:32:47.859: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:32:47.859: INFO: Container nodereport ready: true, restart count 0 May 4 16:32:47.859: INFO: Container reconcile ready: true, restart count 0 May 4 16:32:47.859: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:32:47.859: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:32:47.859: INFO: Container grafana ready: true, restart count 0 May 4 16:32:47.859: INFO: Container prometheus ready: true, restart count 1 May 4 16:32:47.859: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:32:47.859: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:32:47.859: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:32:47.859: INFO: Init container install-cni ready: true, restart count 2 May 4 16:32:47.859: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:32:47.859: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.859: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:32:47.859: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:32:47.859: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:32:47.859: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:32:47.859: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container 
statuses recorded) May 4 16:32:47.859: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:32:47.859: INFO: Container node-exporter ready: true, restart count 0 May 4 16:32:47.859: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:32:47.859: INFO: Container collectd ready: true, restart count 0 May 4 16:32:47.859: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:32:47.859: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:32:47.859: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.859: INFO: Container c ready: false, restart count 0 W0504 16:32:47.870962 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:32:47.906: INFO: Latency metrics for node node1 May 4 16:32:47.906: INFO: Logging node info for node node2 May 4 16:32:47.909: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 45366 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true 
feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:44 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:44 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:32:44 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:32:44 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 
nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:32:47.909: INFO: Logging kubelet events for node node2 May 4 16:32:47.912: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:32:47.929: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container c ready: false, restart count 0 May 4 16:32:47.929: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:32:47.929: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:32:47.929: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:32:47.929: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:32:47.929: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 May 4 16:32:47.929: INFO: Container node-exporter ready: true, restart count 0 May 4 16:32:47.929: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:32:47.929: INFO: Container tas-controller ready: true, restart count 0 May 4 16:32:47.929: INFO: Container tas-extender ready: true, restart count 0 May 4 16:32:47.929: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container kube-multus ready: true, restart count 1 May 4 16:32:47.929: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:32:47.929: INFO: Container discover ready: false, restart count 0 May 4 16:32:47.929: INFO: Container init ready: false, restart count 0 May 4 16:32:47.929: INFO: Container install ready: false, restart count 0 May 4 16:32:47.929: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:32:47.929: INFO: Container collectd ready: true, restart count 0 May 4 16:32:47.929: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:32:47.929: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:32:47.929: INFO: fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container c ready: false, restart count 0 May 4 16:32:47.929: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container c ready: false, restart count 0 May 4 16:32:47.929: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:32:47.929: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 
+0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:32:47.929: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:32:47.929: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:32:47.929: INFO: Container nodereport ready: true, restart count 0 May 4 16:32:47.929: INFO: Container reconcile ready: true, restart count 0 May 4 16:32:47.929: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:32:47.929: INFO: Init container install-cni ready: true, restart count 2 May 4 16:32:47.929: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:32:47.929: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:32:47.929: INFO: Container cmk-webhook ready: true, restart count 0 W0504 16:32:47.944190 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:32:48.000: INFO: Latency metrics for node node2 May 4 16:32:48.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5475" for this suite. 
• Failure [302.972 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:32:47.656: Unexpected error:
      <*errors.errorString | 0xc006735c30>: {
          s: "expected pod \"downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813\" success: Gave up after waiting 5m0s for pod \"downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813\" to be \"Succeeded or Failed\"",
      }
      expected pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813" success: Gave up after waiting 5m0s for pod "downward-api-6bd37b02-f45d-43d9-bbeb-bb1f5f8a4813" to be "Succeeded or Failed"
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":645,"failed":3,"failures":["[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]"]}
May 4 16:32:48.015: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:18:38.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
May 4 16:33:38.847: FAIL: failed to ensure job completion in namespace: job-361
Unexpected error:
    <*errors.errorString | 0xc0003001f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func6.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:113 +0x33f
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000871980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc000871980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc000871980, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "job-361".
STEP: Found 16 events.
May 4 16:33:38.851: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for fail-once-local-bkr6m: { } Scheduled: Successfully assigned job-361/fail-once-local-bkr6m to node2
May 4 16:33:38.851: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for fail-once-local-ltx4r: { } Scheduled: Successfully assigned job-361/fail-once-local-ltx4r to node1
May 4 16:33:38.851: INFO: At 2021-05-04 16:18:38 +0000 UTC - event for fail-once-local: {job-controller } SuccessfulCreate: Created pod: fail-once-local-bkr6m
May 4 16:33:38.851: INFO: At 2021-05-04 16:18:38 +0000 UTC - event for fail-once-local: {job-controller } SuccessfulCreate: Created pod: fail-once-local-ltx4r
May 4 16:33:38.851: INFO: At 2021-05-04 16:18:42 +0000 UTC - event for fail-once-local-bkr6m: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:33:38.851: INFO: At 2021-05-04 16:18:42 +0000 UTC - event for fail-once-local-bkr6m: {multus } AddedInterface: Add eth0 [10.244.3.227/24]
May 4 16:33:38.851: INFO: At 2021-05-04 16:18:42 +0000 UTC - event for fail-once-local-ltx4r: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 4 16:33:38.851: INFO: At 2021-05-04 16:18:42 +0000 UTC - event for fail-once-local-ltx4r: {multus } AddedInterface: Add eth0 [10.244.4.170/24]
May 4 16:33:38.851: INFO: At 2021-05-04 16:18:43 +0000 UTC - event for fail-once-local-ltx4r: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:33:38.851: INFO: At 2021-05-04 16:18:43 +0000 UTC - event for fail-once-local-ltx4r: {kubelet node1} Failed: Error: ErrImagePull
May 4 16:33:38.851: INFO: At 2021-05-04 16:18:43 +0000 UTC - event for fail-once-local-ltx4r: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:33:38.852: INFO: At 2021-05-04 16:18:43 +0000 UTC - event for fail-once-local-ltx4r: {kubelet node1} Failed: Error: ImagePullBackOff
May 4 16:33:38.852: INFO: At 2021-05-04 16:18:44 +0000 UTC - event for fail-once-local-bkr6m: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:33:38.852: INFO: At 2021-05-04 16:18:44 +0000 UTC - event for fail-once-local-bkr6m: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:33:38.852: INFO: At 2021-05-04 16:18:45 +0000 UTC - event for fail-once-local-bkr6m: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 4 16:33:38.852: INFO: At 2021-05-04 16:18:45 +0000 UTC - event for fail-once-local-bkr6m: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:33:38.854: INFO: POD                    NODE   PHASE    GRACE  CONDITIONS
May 4 16:33:38.854: INFO: fail-once-local-bkr6m  node2  Pending  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:18:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:18:38 +0000 UTC ContainersNotReady containers with unready status: [c]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:18:38 +0000 UTC ContainersNotReady containers with unready status: [c]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:18:38 +0000 UTC }]
May 4 16:33:38.854: INFO:
fail-once-local-ltx4r node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:18:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:18:38 +0000 UTC ContainersNotReady containers with unready status: [c]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:18:38 +0000 UTC ContainersNotReady containers with unready status: [c]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:18:38 +0000 UTC }] May 4 16:33:38.854: INFO: May 4 16:33:38.858: INFO: Logging node info for node master1 May 4 16:33:38.861: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 45551 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:32 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:32 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:32 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:33:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:33:38.861: INFO: Logging kubelet events for node master1 May 4 16:33:38.864: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:33:38.874: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.874: INFO: 
Container kube-controller-manager ready: true, restart count 2 May 4 16:33:38.874: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.874: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:33:38.874: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:33:38.874: INFO: Container docker-registry ready: true, restart count 0 May 4 16:33:38.874: INFO: Container nginx ready: true, restart count 0 May 4 16:33:38.874: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:33:38.874: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:33:38.874: INFO: Container node-exporter ready: true, restart count 0 May 4 16:33:38.874: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.874: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:33:38.874: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.874: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:33:38.874: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:33:38.874: INFO: Init container install-cni ready: true, restart count 0 May 4 16:33:38.874: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:33:38.874: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.874: INFO: Container kube-multus ready: true, restart count 1 May 4 16:33:38.874: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.874: INFO: Container coredns ready: true, restart count 1 May 4 16:33:38.874: INFO: 
node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.874: INFO: Container nfd-controller ready: true, restart count 0 W0504 16:33:38.885819 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:33:38.911: INFO: Latency metrics for node master1 May 4 16:33:38.911: INFO: Logging node info for node master2 May 4 16:33:38.913: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 45550 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:33:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:33:31 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:33:38.913: INFO: Logging kubelet events for node master2 May 4 16:33:38.915: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:33:38.922: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.923: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:33:38.923: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:33:38.923: INFO: Init container install-cni ready: true, restart count 0 May 4 16:33:38.923: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:33:38.923: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.923: INFO: Container kube-multus ready: true, restart count 1 May 4 16:33:38.923: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.923: INFO: Container autoscaler ready: true, restart count 1 May 4 16:33:38.923: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:33:38.923: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:33:38.923: INFO: Container node-exporter ready: true, restart count 0 May 4 16:33:38.923: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.923: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:33:38.923: INFO: 
kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.923: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:33:38.923: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.923: INFO: Container kube-scheduler ready: true, restart count 2 W0504 16:33:38.936004 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:33:38.964: INFO: Latency metrics for node master2 May 4 16:33:38.964: INFO: Logging node info for node master3 May 4 16:33:38.966: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 45549 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:31 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:31 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:33:31 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:33:38.967: INFO: Logging kubelet events for node master3 May 4 16:33:38.969: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:33:38.977: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.977: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:33:38.977: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.978: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:33:38.978: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.978: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:33:38.978: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.978: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:33:38.978: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:33:38.978: INFO: Init container install-cni ready: true, restart count 0 May 4 16:33:38.978: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:33:38.978: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:33:38.978: INFO: Container kube-multus ready: true, restart count 1 May 4 16:33:38.978: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container 
statuses recorded) May 4 16:33:38.978: INFO: Container coredns ready: true, restart count 1 May 4 16:33:38.978: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:33:38.978: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:33:38.978: INFO: Container node-exporter ready: true, restart count 0 W0504 16:33:38.990797 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:33:39.017: INFO: Latency metrics for node master3 May 4 16:33:39.017: INFO: Logging node info for node node1 May 4 16:33:39.020: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 45569 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:38 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:38 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:38 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:33:38 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:33:39.021: INFO: Logging kubelet events for node node1 May 4 16:33:39.024: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:33:39.038: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.038: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:33:39.038: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:33:39.038: INFO: Container nodereport ready: true, restart count 0 May 4 16:33:39.038: INFO: Container reconcile ready: true, restart count 0 May 4 16:33:39.038: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:33:39.038: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:33:39.038: INFO: Container grafana ready: true, restart count 0 May 4 16:33:39.038: INFO: Container prometheus ready: true, restart count 1 May 4 16:33:39.038: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:33:39.038: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:33:39.038: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:33:39.038: INFO: Init container install-cni ready: true, restart count 2 May 4 16:33:39.038: INFO: Container kube-flannel ready: true, restart count 2 
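As an aside on reading the node status dumps above: capacity and allocatable values such as `439913340Ki` and `20Gi` use Kubernetes BinarySI quantity suffixes (powers of 1024). The sketch below is illustrative only — `quantity_to_bytes` is a hypothetical helper, not the real `k8s.io/apimachinery` `resource.Quantity` parser — but it reproduces the byte values printed alongside those quantities in the log.

```python
# Illustrative sketch (assumed helper, not the apimachinery implementation):
# convert BinarySI quantity strings from the node capacity logs into bytes.

BINARY_SI = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def quantity_to_bytes(q: str) -> int:
    """Parse a quantity like '439913340Ki' or '20Gi' into plain bytes."""
    for suffix, factor in BINARY_SI.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # bare integers are already bytes

# Values taken from the node1 status logged above:
print(quantity_to_bytes("439913340Ki"))  # ephemeral-storage capacity -> 450471260160
print(quantity_to_bytes("20Gi"))         # hugepages-2Mi total        -> 21474836480
```

The log's `ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI}` entry shows both forms side by side, which is a convenient cross-check.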
May 4 16:33:39.038: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.038: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:33:39.038: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:33:39.038: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:33:39.038: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:33:39.038: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:33:39.038: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:33:39.038: INFO: Container node-exporter ready: true, restart count 0 May 4 16:33:39.038: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:33:39.038: INFO: Container collectd ready: true, restart count 0 May 4 16:33:39.038: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:33:39.038: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:33:39.038: INFO: fail-once-local-ltx4r started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.038: INFO: Container c ready: false, restart count 0 May 4 16:33:39.038: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.038: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:33:39.038: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.038: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:33:39.038: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.038: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:33:39.038: INFO: liveness-http 
started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.039: INFO: Container liveness-http ready: true, restart count 22 May 4 16:33:39.039: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:33:39.039: INFO: Container discover ready: false, restart count 0 May 4 16:33:39.039: INFO: Container init ready: false, restart count 0 May 4 16:33:39.039: INFO: Container install ready: false, restart count 0 May 4 16:33:39.039: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.039: INFO: Container kube-multus ready: true, restart count 1 W0504 16:33:39.049990 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:33:39.098: INFO: Latency metrics for node node1 May 4 16:33:39.098: INFO: Logging node info for node node2 May 4 16:33:39.101: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 45562 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true 
feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:35 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:35 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:33:35 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:33:35 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 
nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:33:39.102: INFO: Logging kubelet events for node node2 May 4 16:33:39.104: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:33:39.117: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container kube-multus ready: true, restart count 1 May 4 16:33:39.117: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:33:39.117: INFO: Container discover ready: false, restart count 0 May 4 16:33:39.117: INFO: Container init ready: false, restart count 0 May 4 16:33:39.117: INFO: Container install ready: false, restart count 0 May 4 16:33:39.117: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:33:39.117: INFO: Container collectd ready: true, restart count 0 May 4 16:33:39.117: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:33:39.117: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:33:39.117: INFO: 
fail-once-local-bkr6m started at 2021-05-04 16:18:38 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container c ready: false, restart count 0 May 4 16:33:39.117: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container c ready: false, restart count 0 May 4 16:33:39.117: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:33:39.117: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:33:39.117: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:33:39.117: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:33:39.117: INFO: Container nodereport ready: true, restart count 0 May 4 16:33:39.117: INFO: Container reconcile ready: true, restart count 0 May 4 16:33:39.117: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:33:39.117: INFO: Init container install-cni ready: true, restart count 2 May 4 16:33:39.117: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:33:39.117: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:33:39.117: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container c ready: false, restart count 0 May 4 16:33:39.117: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 
UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:33:39.117: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:33:39.117: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:33:39.117: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:33:39.117: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:33:39.117: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:33:39.117: INFO: Container node-exporter ready: true, restart count 0 May 4 16:33:39.117: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:33:39.117: INFO: Container tas-controller ready: true, restart count 0 May 4 16:33:39.117: INFO: Container tas-extender ready: true, restart count 0 W0504 16:33:39.130418 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:33:39.160: INFO: Latency metrics for node node2 May 4 16:33:39.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-361" for this suite. 
• Failure [900.356 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:33:38.847: failed to ensure job completion in namespace: job-361 Unexpected error: <*errors.errorString | 0xc0003001f0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:113 ------------------------------ {"msg":"FAILED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":20,"skipped":360,"failed":2,"failures":["[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]"]} May 4 16:33:39.176: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:25:24.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism May 4 16:40:24.220: FAIL: failed to ensure active pods == parallelism in namespace: job-5919 Unexpected error: <*errors.errorString | 0xc0003001f0>: { s: "timed out waiting for the condition", } timed out waiting for the 
condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func6.6() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:163 +0x345 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000179e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc000179e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000179e00, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "job-5919". STEP: Found 16 events. May 4 16:40:24.225: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for foo-9dkvq: { } Scheduled: Successfully assigned job-5919/foo-9dkvq to node2 May 4 16:40:24.225: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for foo-sxtvr: { } Scheduled: Successfully assigned job-5919/foo-sxtvr to node2 May 4 16:40:24.225: INFO: At 2021-05-04 16:25:24 +0000 UTC - event for foo: {job-controller } SuccessfulCreate: Created pod: foo-9dkvq May 4 16:40:24.225: INFO: At 2021-05-04 16:25:24 +0000 UTC - event for foo: {job-controller } SuccessfulCreate: Created pod: foo-sxtvr May 4 16:40:24.225: INFO: At 2021-05-04 16:25:25 +0000 UTC - event for foo-sxtvr: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:40:24.225: INFO: At 2021-05-04 16:25:25 +0000 UTC - event for foo-sxtvr: {multus } AddedInterface: Add eth0 [10.244.3.251/24] May 4 16:40:24.225: INFO: At 2021-05-04 16:25:26 +0000 UTC - event for foo-9dkvq: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 4 16:40:24.225: INFO: At 2021-05-04 16:25:26 +0000 UTC - event for foo-9dkvq: {multus } AddedInterface: Add eth0 [10.244.3.252/24] May 4 16:40:24.225: INFO: At 
2021-05-04 16:25:26 +0000 UTC - event for foo-sxtvr: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:40:24.225: INFO: At 2021-05-04 16:25:26 +0000 UTC - event for foo-sxtvr: {kubelet node2} Failed: Error: ErrImagePull May 4 16:40:24.225: INFO: At 2021-05-04 16:25:26 +0000 UTC - event for foo-sxtvr: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:40:24.225: INFO: At 2021-05-04 16:25:26 +0000 UTC - event for foo-sxtvr: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:40:24.225: INFO: At 2021-05-04 16:25:27 +0000 UTC - event for foo-9dkvq: {kubelet node2} Failed: Error: ErrImagePull May 4 16:40:24.225: INFO: At 2021-05-04 16:25:27 +0000 UTC - event for foo-9dkvq: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 4 16:40:24.225: INFO: At 2021-05-04 16:25:28 +0000 UTC - event for foo-9dkvq: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 4 16:40:24.225: INFO: At 2021-05-04 16:25:28 +0000 UTC - event for foo-9dkvq: {kubelet node2} Failed: Error: ImagePullBackOff May 4 16:40:24.227: INFO: POD NODE PHASE GRACE CONDITIONS May 4 16:40:24.227: INFO: foo-9dkvq node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:24 +0000 UTC ContainersNotReady containers with unready status: [c]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:24 +0000 UTC ContainersNotReady containers with unready status: [c]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:24 +0000 UTC }] May 4 16:40:24.228: INFO: foo-sxtvr node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:24 +0000 UTC ContainersNotReady containers with unready status: [c]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:24 +0000 UTC ContainersNotReady containers with unready status: [c]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-04 16:25:24 +0000 UTC }] May 4 16:40:24.228: INFO: May 4 16:40:24.231: INFO: Logging node info for node master1 May 4 16:40:24.234: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 47039 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan 
flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on 
this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:40:24.235: INFO: Logging kubelet events for node master1 May 4 16:40:24.237: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:40:24.253: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.253: INFO: Container 
coredns ready: true, restart count 1 May 4 16:40:24.253: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.253: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:40:24.253: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:40:24.253: INFO: Init container install-cni ready: true, restart count 0 May 4 16:40:24.253: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:40:24.253: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.253: INFO: Container kube-multus ready: true, restart count 1 May 4 16:40:24.253: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:40:24.253: INFO: Container docker-registry ready: true, restart count 0 May 4 16:40:24.253: INFO: Container nginx ready: true, restart count 0 May 4 16:40:24.253: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:40:24.253: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:40:24.253: INFO: Container node-exporter ready: true, restart count 0 May 4 16:40:24.253: INFO: kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.253: INFO: Container kube-scheduler ready: true, restart count 0 May 4 16:40:24.253: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.253: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:40:24.253: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.253: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:40:24.253: INFO: 
kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.253: INFO: Container kube-proxy ready: true, restart count 1 W0504 16:40:24.267257 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:40:24.297: INFO: Latency metrics for node master1 May 4 16:40:24.298: INFO: Logging node info for node master2 May 4 16:40:24.300: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 47034 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:40:24.301: INFO: Logging kubelet events for node master2 May 4 16:40:24.302: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:40:24.316: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.316: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:40:24.316: INFO: kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.316: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:40:24.316: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.316: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:40:24.316: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.316: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:40:24.316: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:40:24.316: INFO: Init container install-cni ready: true, restart count 0 May 4 16:40:24.316: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:40:24.316: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.316: INFO: Container kube-multus ready: true, restart count 1 May 4 16:40:24.316: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 
container statuses recorded) May 4 16:40:24.316: INFO: Container autoscaler ready: true, restart count 1 May 4 16:40:24.316: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:40:24.316: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:40:24.316: INFO: Container node-exporter ready: true, restart count 0 W0504 16:40:24.329412 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:40:24.352: INFO: Latency metrics for node master2 May 4 16:40:24.352: INFO: Logging node info for node master3 May 4 16:40:24.355: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 47033 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:23 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:40:23 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:40:24.355: INFO: Logging kubelet events for node master3 May 4 16:40:24.357: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:40:24.373: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.373: INFO: Container coredns ready: true, restart count 1 May 4 16:40:24.373: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:40:24.373: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:40:24.373: INFO: Container node-exporter ready: true, restart count 0 May 4 16:40:24.373: INFO: kube-apiserver-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.373: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:40:24.373: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.373: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:40:24.373: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.373: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:40:24.373: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.373: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:40:24.373: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses 
recorded) May 4 16:40:24.373: INFO: Init container install-cni ready: true, restart count 0 May 4 16:40:24.373: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:40:24.373: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:40:24.373: INFO: Container kube-multus ready: true, restart count 1 W0504 16:40:24.384789 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:40:24.414: INFO: Latency metrics for node master3 May 4 16:40:24.414: INFO: Logging node info for node node1 May 4 16:40:24.417: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 47015 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:19 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:19 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:19 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:40:19 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:40:24.418: INFO: Logging kubelet events for node node1
May 4 16:40:24.420: INFO: Logging pods the kubelet thinks is on node node1
May 4 16:40:24.464: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.464: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:40:24.464: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.464: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:40:24.464: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.464: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:40:24.464: INFO: liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.464: INFO: Container liveness-http ready: true, restart count 24
May 4 16:40:24.464: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded)
May 4 16:40:24.464: INFO: Container discover ready: false, restart count 0
May 4 16:40:24.464: INFO: Container init ready: false, restart count 0
May 4 16:40:24.464: INFO: Container install ready: false, restart count 0
May 4 16:40:24.464: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.464: INFO: Container kube-multus ready: true, restart count 1
May 4 16:40:24.464: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.464: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 16:40:24.464: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:40:24.464: INFO: Container nodereport ready: true, restart count 0
May 4 16:40:24.464: INFO: Container reconcile ready: true, restart count 0
May 4 16:40:24.464: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded)
May 4 16:40:24.464: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 4 16:40:24.464: INFO: Container grafana ready: true, restart count 0
May 4 16:40:24.464: INFO: Container prometheus ready: true, restart count 1
May 4 16:40:24.464: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 4 16:40:24.464: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 4 16:40:24.464: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:40:24.464: INFO: Init container install-cni ready: true, restart count 2
May 4 16:40:24.464: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:40:24.464: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.464: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:40:24.464: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded)
May 4 16:40:24.464: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:40:24.464: INFO: Container prometheus-operator ready: true, restart count 0
May 4 16:40:24.464: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:40:24.464: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:40:24.464: INFO: Container node-exporter ready: true, restart count 0
May 4 16:40:24.464: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:40:24.464: INFO: Container collectd ready: true, restart count 0
May 4 16:40:24.464: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:40:24.464: INFO: Container rbac-proxy ready: true, restart count 0
W0504 16:40:24.477338 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:40:24.671: INFO: Latency metrics for node node1
May 4 16:40:24.671: INFO: Logging node info for node node2
May 4 16:40:24.674: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 47006 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true
feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:16 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:16 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:40:16 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:40:16 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 
nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 4 16:40:24.674: INFO: Logging kubelet events for node node2
May 4 16:40:24.676: INFO: Logging pods the kubelet thinks is on node node2
May 4 16:40:24.695: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded)
May 4 16:40:24.695: INFO: Init container install-cni ready: true, restart count 2
May 4 16:40:24.695: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:40:24.695: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:40:24.695: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded)
May 4 16:40:24.695: INFO: Container tas-controller ready: true, restart count 0
May 4 16:40:24.695: INFO: Container tas-extender ready: true, restart count 0
May 4 16:40:24.695: INFO: foo-9dkvq started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container c ready: false, restart count 0
May 4 16:40:24.695: INFO: liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:40:24.695: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:40:24.695: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:40:24.695: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded)
May 4 16:40:24.695: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:40:24.695: INFO: Container node-exporter ready: true, restart count 0
May 4 16:40:24.695: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container kube-multus ready: true, restart count 1
May 4 16:40:24.695: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded)
May 4 16:40:24.695: INFO: Container discover ready: false, restart count 0
May 4 16:40:24.695: INFO: Container init ready: false, restart count 0
May 4 16:40:24.695: INFO: Container install ready: false, restart count 0
May 4 16:40:24.695: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded)
May 4 16:40:24.695: INFO: Container collectd ready: true, restart count 0
May 4 16:40:24.695: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:40:24.695: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:40:24.695: INFO: foo-sxtvr started at 2021-05-04 16:25:24 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container c ready: false, restart count 0
May 4 16:40:24.695: INFO: nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:40:24.695: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:40:24.695: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded)
May 4 16:40:24.695: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:40:24.695: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded)
May 4 16:40:24.695: INFO: Container nodereport ready: true, restart count 0
May 4 16:40:24.695: INFO: Container reconcile ready: true, restart count 0
W0504 16:40:24.707827 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 4 16:40:24.739: INFO: Latency metrics for node node2
May 4 16:40:24.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5919" for this suite.
• Failure [900.560 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  May 4 16:40:24.220: failed to ensure active pods == parallelism in namespace: job-5919
  Unexpected error:
      <*errors.errorString | 0xc0003001f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:163
------------------------------
{"msg":"FAILED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":7,"skipped":259,"failed":5,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","[k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","[sig-apps] Job should delete a job [Conformance]"]}
May 4 16:40:24.751: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":485,"failed":3,"failures":["[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]"]}
May 4 16:27:59.354: INFO: Running AfterSuite actions on all nodes
May 4 16:40:24.833: INFO: Running AfterSuite actions on node 1
May 4 16:40:24.833: INFO: Skipping dumping logs from cluster

Summarizing 35 Failures:

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1760

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1242

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:809

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3444

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3511

[Fail] [sig-node] ConfigMap [It] should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-node] Downward API [It] should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-node] Downward API [It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-api-machinery] Secrets [It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [k8s.io] Kubelet when scheduling a busybox command in a pod [It] should print the output to logs [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103

[Fail] [k8s.io] Variable Expansion [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-apps] Deployment [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:289

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3511

[Fail] [sig-node] ConfigMap [It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58

[Fail] [k8s.io] [sig-node] PreStop [It] should call prestop when killing a pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:107

[Fail] [sig-cli] Kubectl client Kubectl replace [It] should update a single-container pod's image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1567

[Fail] [k8s.io] Pods [It] should be updated [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103

[Fail] [k8s.io] Container Runtime blackbox test on terminated container [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:154

[Fail] [k8s.io] InitContainer [NodeConformance] [It] should invoke init containers on a RestartNever pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:225

[Fail] [k8s.io] Probing container [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:426

[Fail] [k8s.io] Pods [It] should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:527

[Fail] [sig-apps] ReplicationController [It] should adopt matching pods on creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103

[Fail] [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:212

[Fail] [k8s.io] Container Runtime blackbox test on terminated container [It] should report termination message [LinuxOnly] from file when pod
succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:154 [Fail] [sig-storage] EmptyDir volumes [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 [Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58 [Fail] [k8s.io] Pods [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 [Fail] [k8s.io] Variable Expansion [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:331 [Fail] [k8s.io] Variable Expansion [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:270 [Fail] [sig-auth] ServiceAccounts [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:230 [Fail] [k8s.io] Security Context When creating a container with runAsUser [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:212 [Fail] [sig-node] Downward API [It] should provide pod name, namespace and IP address as env 
vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 [Fail] [sig-apps] Job [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:113 [Fail] [sig-apps] Job [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:163 Ran 286 of 5484 Specs in 2197.584 seconds FAIL! -- 251 Passed | 35 Failed | 0 Pending | 5198 Skipped Ginkgo ran 1 suite in 36m39.044809984s Test Suite Failed