Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654903444 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Jun 10 23:24:05.769: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.771: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 10 23:24:05.793: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 10 23:24:05.844: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting
Jun 10 23:24:05.844: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting
Jun 10 23:24:05.844: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 10 23:24:05.844: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 10 23:24:05.844: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 10 23:24:05.861: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 10 23:24:05.861: INFO: e2e test version: v1.21.9
Jun 10 23:24:05.862: INFO: kube-apiserver version: v1.21.1
Jun 10 23:24:05.863: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.870: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Jun 10 23:24:05.865: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.885: INFO: Cluster IP family: ipv4
Jun 10 23:24:05.867: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.887: INFO: Cluster IP family: ipv4
SSS
------------------------------
Jun 10 23:24:05.870: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.890: INFO: Cluster IP family: ipv4
SSS
------------------------------
Jun 10 23:24:05.872: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.895: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Jun 10 23:24:05.878: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.903: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Jun 10 23:24:05.886: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.904: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 10 23:24:05.898: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.919: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Jun 10 23:24:05.904: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.921: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 10 23:24:05.905: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:05.932: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:06.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
W0610 23:24:06.549384 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:24:06.549: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:24:06.551: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33
Jun 10 23:24:06.553: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:06.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-5526" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:05.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W0610 23:24:05.961238 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:24:05.961: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:24:05.964: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull from private registry without secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:17.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1233" for this suite.
• [SLOW TEST:11.099 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":1,"skipped":11,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:06.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W0610 23:24:06.504793 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:24:06.505: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:24:06.506: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull from private registry with secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
STEP: create image pull secret
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:20.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1199" for this suite.
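Editor's note: the two registry-pull specs here exercise kubelet image-pull credentials. Without a secret the pull fails (ErrImagePull); with a pod-level imagePullSecrets reference it succeeds. A minimal sketch of an equivalent manifest follows; the registry host, secret name, and image are illustrative assumptions, not the suite's actual test fixtures.

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod          # hypothetical name
spec:
  restartPolicy: Never
  imagePullSecrets:
  - name: regcred                  # created beforehand, e.g.:
    # kubectl create secret docker-registry regcred \
    #   --docker-server=registry.example.com \
    #   --docker-username=<user> --docker-password=<password>
  containers:
  - name: test-container
    image: registry.example.com/private/busybox:1.29   # hypothetical private image
    command: ["sh", "-c", "echo ok"]

Omitting imagePullSecrets (and any matching node-level credentials) reproduces the failing spec: the container sits in ErrImagePull / ImagePullBackOff and never starts.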
• [SLOW TEST:14.125 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":1,"skipped":196,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:06.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
W0610 23:24:06.572613 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:24:06.572: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:24:06.574: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
Jun 10 23:24:06.582: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] should create a pod that prints his name and namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
STEP: creating the pod
Jun 10 23:24:06.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5303 create -f -'
Jun 10 23:24:07.141: INFO: stderr: ""
Jun 10 23:24:07.141: INFO: stdout: "pod/dapi-test-pod created\n"
STEP: checking if name and namespace were passed correctly
Jun 10 23:24:23.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5303 logs dapi-test-pod test-container'
Jun 10 23:24:23.331: INFO: stderr: ""
Jun 10 23:24:23.331: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5303\nMY_POD_IP=10.244.3.159\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
Jun 10 23:24:23.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5303 logs dapi-test-pod test-container'
Jun 10 23:24:23.518: INFO: stderr: ""
Jun 10 23:24:23.518: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5303\nMY_POD_IP=10.244.3.159\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:23.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-5303" for this suite.
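Editor's note: the MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, and MY_HOST_IP values in the log above come from Downward API fieldRef env vars. A minimal sketch of a pod that produces this output, assuming a generic busybox image rather than the suite's own test image:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29            # assumed image
    command: ["sh", "-c", "env"]   # dump all env vars, as seen in the log
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP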
• [SLOW TEST:16.975 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133
    should create a pod that prints his name and namespace
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:06.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W0610 23:24:06.079946 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:24:06.080: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:24:06.081: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 10 23:24:06.094: INFO: Waiting up to 5m0s for pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf" in namespace "security-context-2410" to be "Succeeded or Failed"
Jun 10 23:24:06.097: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.730078ms
Jun 10 23:24:08.102: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007129659s
Jun 10 23:24:10.105: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010585873s
Jun 10 23:24:12.109: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01435603s
Jun 10 23:24:14.113: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018363473s
Jun 10 23:24:16.118: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023080546s
Jun 10 23:24:18.122: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027026277s
Jun 10 23:24:20.128: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033604288s
Jun 10 23:24:22.133: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.038460728s
Jun 10 23:24:24.139: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.044526534s
STEP: Saw pod success
Jun 10 23:24:24.139: INFO: Pod "security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf" satisfied condition "Succeeded or Failed"
Jun 10 23:24:24.141: INFO: Trying to get logs from node node2 pod security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf container test-container:
STEP: delete the pod
Jun 10 23:24:24.156: INFO: Waiting for pod security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf to disappear
Jun 10 23:24:24.158: INFO: Pod security-context-f3175c0e-4222-487d-b611-b3c5acc0d1bf no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:24.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2410" for this suite.

• [SLOW TEST:18.108 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":32,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:20.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 10 23:24:20.683: INFO: Waiting up to 5m0s for pod "security-context-6fc1a1b9-5380-4c6f-b59b-8435c2073bab" in namespace "security-context-9412" to be "Succeeded or Failed"
Jun 10 23:24:20.685: INFO: Pod "security-context-6fc1a1b9-5380-4c6f-b59b-8435c2073bab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13051ms
Jun 10 23:24:22.689: INFO: Pod "security-context-6fc1a1b9-5380-4c6f-b59b-8435c2073bab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006313583s
Jun 10 23:24:24.693: INFO: Pod "security-context-6fc1a1b9-5380-4c6f-b59b-8435c2073bab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009422009s
STEP: Saw pod success
Jun 10 23:24:24.693: INFO: Pod "security-context-6fc1a1b9-5380-4c6f-b59b-8435c2073bab" satisfied condition "Succeeded or Failed"
Jun 10 23:24:24.695: INFO: Trying to get logs from node node1 pod security-context-6fc1a1b9-5380-4c6f-b59b-8435c2073bab container test-container:
STEP: delete the pod
Jun 10 23:24:24.724: INFO: Waiting for pod security-context-6fc1a1b9-5380-4c6f-b59b-8435c2073bab to disappear
Jun 10 23:24:24.725: INFO: Pod security-context-6fc1a1b9-5380-4c6f-b59b-8435c2073bab no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:24.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9412" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":2,"skipped":212,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:25.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:27.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3896" for this suite.
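Editor's note: the sysctl spec above asks the kubelet to set an unsafe ("greylisted") sysctl that has not been allowlisted, and verifies the pod is rejected rather than started. A minimal sketch, assuming kernel.msgmax as the unsafe sysctl (an illustrative choice) and a generic busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-rejected            # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.msgmax          # unsafe sysctl; not in the kubelet allowlist
      value: "10000"
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "sleep 1"]

Unless the kubelet was started with --allowed-unsafe-sysctls=kernel.msgmax, this pod is rejected with reason SysctlForbidden, which is exactly what the spec checks for.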
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":3,"skipped":369,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:27.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Jun 10 23:24:27.236: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:27.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-6246" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:17.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run with an explicit root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:27.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6707" for this suite.
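Editor's note: the runAsNonRoot spec above checks that a pod declaring runAsNonRoot: true together with an explicit root UID is refused by the kubelet. A minimal sketch, with a hypothetical name and an assumed busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: explicit-root-uid          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: explicit-root-uid
    image: busybox:1.29
    command: ["id", "-u"]
    securityContext:
      runAsNonRoot: true
      runAsUser: 0                 # contradicts runAsNonRoot; kubelet refuses to start the container

The container never runs: the kubelet fails it at container-creation time (CreateContainerConfigError) because the configured UID 0 violates the non-root policy.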
• [SLOW TEST:10.052 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:06.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
W0610 23:24:06.130056 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:24:06.130: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:24:06.131: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
STEP: Creating pod liveness-f19d91c2-1c52-4fa8-ad9d-f2ff6056ef2e in namespace container-probe-9658
Jun 10 23:24:18.154: INFO: Started pod liveness-f19d91c2-1c52-4fa8-ad9d-f2ff6056ef2e in namespace container-probe-9658
STEP: checking the pod's current state and verifying that restartCount is present
Jun 10 23:24:18.157: INFO: Initial restart count of pod liveness-f19d91c2-1c52-4fa8-ad9d-f2ff6056ef2e is 0
Jun 10 23:24:28.178: INFO: Restart count of pod container-probe-9658/liveness-f19d91c2-1c52-4fa8-ad9d-f2ff6056ef2e is now 1 (10.021513043s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:28.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9658" for this suite.
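Editor's note: the probe spec above verifies that the kubelet follows a local HTTP redirect returned by a liveness endpoint, and restarts the container once the probe ultimately fails (restartCount goes 0 to 1 in the log). A rough sketch of such a pod; the image tag, port, and redirecting path here are assumptions, not the suite's exact fixture:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-redirect     # hypothetical name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumed tag
    args: ["liveness"]             # agnhost liveness server; starts failing after a few seconds
    livenessProbe:
      httpGet:
        path: /redirect?loc=healthz   # hypothetical handler that 302-redirects to a local path
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1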
• [SLOW TEST:22.086 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":1,"skipped":37,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:24.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 10 23:24:24.046: INFO: Waiting up to 5m0s for pod "security-context-a829d16f-c038-4914-ae67-d5ca036d4466" in namespace "security-context-9470" to be "Succeeded or Failed"
Jun 10 23:24:24.048: INFO: Pod "security-context-a829d16f-c038-4914-ae67-d5ca036d4466": Phase="Pending", Reason="", readiness=false. Elapsed: 2.460249ms
Jun 10 23:24:26.053: INFO: Pod "security-context-a829d16f-c038-4914-ae67-d5ca036d4466": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006878481s
Jun 10 23:24:28.059: INFO: Pod "security-context-a829d16f-c038-4914-ae67-d5ca036d4466": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013616463s
Jun 10 23:24:30.064: INFO: Pod "security-context-a829d16f-c038-4914-ae67-d5ca036d4466": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017677916s
STEP: Saw pod success
Jun 10 23:24:30.064: INFO: Pod "security-context-a829d16f-c038-4914-ae67-d5ca036d4466" satisfied condition "Succeeded or Failed"
Jun 10 23:24:30.066: INFO: Trying to get logs from node node2 pod security-context-a829d16f-c038-4914-ae67-d5ca036d4466 container test-container:
STEP: delete the pod
Jun 10 23:24:30.246: INFO: Waiting for pod security-context-a829d16f-c038-4914-ae67-d5ca036d4466 to disappear
Jun 10 23:24:30.249: INFO: Pod security-context-a829d16f-c038-4914-ae67-d5ca036d4466 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:30.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9470" for this suite.
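Editor's note: the spec above sets RunAsUser at the pod level, so it applies to every container in the pod. A minimal sketch, with an illustrative UID and image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-runasuser              # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # pod-level: every container runs as UID 1001
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "id -u"] # prints 1001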
• [SLOW TEST:6.254 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":2,"skipped":474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:24.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449
STEP: Creating pod liveness-override-1f38067d-c65e-4319-9ca3-e0ee892b7210 in namespace container-probe-6162
Jun 10 23:24:28.506: INFO: Started pod liveness-override-1f38067d-c65e-4319-9ca3-e0ee892b7210 in namespace container-probe-6162
STEP: checking the pod's current state and verifying that restartCount is present
Jun 10 23:24:28.508: INFO: Initial restart count of pod liveness-override-1f38067d-c65e-4319-9ca3-e0ee892b7210 is 0
Jun 10 23:24:32.519: INFO: Restart count of pod container-probe-6162/liveness-override-1f38067d-c65e-4319-9ca3-e0ee892b7210 is now 1 (4.01086535s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:32.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6162" for this suite.
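Editor's note: the [Feature:ProbeTerminationGracePeriod] spec above exercises the probe-level terminationGracePeriodSeconds field, which overrides the pod-level value when a failing liveness probe kills the container. A sketch, assuming the ProbeTerminationGracePeriod feature gate is enabled (the field is alpha in v1.21); names and timings are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-override          # hypothetical name
spec:
  terminationGracePeriodSeconds: 600   # pod-level default: very slow shutdown
  containers:
  - name: liveness
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["sh", "-c", "exit 1"]   # always fails, forcing a restart
      periodSeconds: 2
      failureThreshold: 1
      terminationGracePeriodSeconds: 5    # probe-level override: kill within ~5s instead of 600s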
• [SLOW TEST:8.071 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449
------------------------------
{"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":2,"skipped":176,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:28.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 10 23:24:28.136: INFO: Waiting up to 5m0s for pod "security-context-ee7052e3-d1b6-46b3-bad8-218c810ffe85" in namespace "security-context-4533" to be "Succeeded or Failed"
Jun 10 23:24:28.139: INFO: Pod "security-context-ee7052e3-d1b6-46b3-bad8-218c810ffe85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16568ms
Jun 10 23:24:30.142: INFO: Pod "security-context-ee7052e3-d1b6-46b3-bad8-218c810ffe85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00536053s
Jun 10 23:24:32.148: INFO: Pod "security-context-ee7052e3-d1b6-46b3-bad8-218c810ffe85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012042718s
Jun 10 23:24:34.152: INFO: Pod "security-context-ee7052e3-d1b6-46b3-bad8-218c810ffe85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015136236s
STEP: Saw pod success
Jun 10 23:24:34.152: INFO: Pod "security-context-ee7052e3-d1b6-46b3-bad8-218c810ffe85" satisfied condition "Succeeded or Failed"
Jun 10 23:24:34.154: INFO: Trying to get logs from node node2 pod security-context-ee7052e3-d1b6-46b3-bad8-218c810ffe85 container test-container:
STEP: delete the pod
Jun 10 23:24:34.170: INFO: Waiting for pod security-context-ee7052e3-d1b6-46b3-bad8-218c810ffe85 to disappear
Jun 10 23:24:34.173: INFO: Pod security-context-ee7052e3-d1b6-46b3-bad8-218c810ffe85 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:34.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4533" for this suite.
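Editor's note: the companion to the earlier pod-level RunAsUser spec, this one sets RunAsUser on the container's securityContext, which takes precedence over the pod-level value. A minimal sketch with illustrative UIDs:

apiVersion: v1
kind: Pod
metadata:
  name: container-runasuser        # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # pod-level default
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "id -u"] # prints 1002: the container-level setting wins
    securityContext:
      runAsUser: 1002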
• [SLOW TEST:6.078 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":4,"skipped":846,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:28.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Jun 10 23:24:28.261: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-e5a4f0e1-70e7-4f63-80f7-5e74d48089e9" in namespace "security-context-test-3514" to be "Succeeded or Failed"
Jun 10 23:24:28.264: INFO: Pod "busybox-readonly-true-e5a4f0e1-70e7-4f63-80f7-5e74d48089e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301977ms
Jun 10 23:24:30.266: INFO: Pod "busybox-readonly-true-e5a4f0e1-70e7-4f63-80f7-5e74d48089e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005188233s
Jun 10 23:24:32.273: INFO: Pod "busybox-readonly-true-e5a4f0e1-70e7-4f63-80f7-5e74d48089e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012262436s
Jun 10 23:24:34.277: INFO: Pod "busybox-readonly-true-e5a4f0e1-70e7-4f63-80f7-5e74d48089e9": Phase="Failed", Reason="", readiness=false. Elapsed: 6.015399992s
Jun 10 23:24:34.277: INFO: Pod "busybox-readonly-true-e5a4f0e1-70e7-4f63-80f7-5e74d48089e9" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:34.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3514" for this suite.
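Editor's note: the Phase="Failed" above is expected, not a bug: the framework's wait condition is literally "Succeeded or Failed", and here the spec wants the container's write to a read-only root filesystem to fail. A minimal sketch, with an assumed image and write target:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-true      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-true
    image: busybox:1.29
    command: ["sh", "-c", "touch /file"]   # write to the rootfs; expected to fail
    securityContext:
      readOnlyRootFilesystem: true

The touch fails with "Read-only file system", the container exits non-zero, and the pod ends in Phase=Failed, which is precisely the outcome this spec asserts.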
• [SLOW TEST:6.058 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:34.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
Jun 10 23:24:34.488: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:34.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-9318" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:34.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-privileged-pod
STEP: Waiting for a default service account to be provisioned in namespace
[It] should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
STEP: Creating a pod with a privileged container
Jun 10 23:24:34.452: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:36.456: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:38.456: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:40.460: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:42.455: INFO: The status of Pod privileged-pod is Running (Ready = true)
STEP: Executing in the privileged container
Jun 10 23:24:42.458: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-587 PodName:privileged-pod ContainerName:privileged-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:42.459: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:43.066: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-587 PodName:privileged-pod ContainerName:privileged-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:43.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Executing in the non-privileged container
Jun 10 23:24:43.905: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-587 PodName:privileged-pod ContainerName:not-privileged-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:43.905: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:43.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-587" for this suite.
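Editor's note: the privileged-pod spec above runs `ip link add dummy1 type dummy` in two containers of the same pod and expects it to succeed only in the privileged one. A minimal two-container sketch; the image is an assumption (any image with the ip tool works), the container names mirror the log:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
  - name: privileged-container
    image: busybox:1.29            # assumed image with the `ip` applet
    command: ["sleep", "3600"]
    securityContext:
      privileged: true             # full capabilities: `ip link add dummy1 type dummy` succeeds
  - name: not-privileged-container
    image: busybox:1.29
    command: ["sleep", "3600"]
    securityContext:
      privileged: false            # same command fails: Operation not permitted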
• [SLOW TEST:9.598 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":5,"skipped":969,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:32.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Jun 10 23:24:32.680: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-2089" to be "Succeeded or Failed"
Jun 10 23:24:32.682: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269299ms
Jun 10 23:24:34.687: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006962447s
Jun 10 23:24:36.691: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010961028s
Jun 10 23:24:38.695: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015679044s
Jun 10 23:24:40.703: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023616573s
Jun 10 23:24:42.707: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027544513s
Jun 10 23:24:44.710: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030220443s
Jun 10 23:24:46.713: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033439761s
Jun 10 23:24:48.720: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.040296052s
Jun 10 23:24:48.720: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:49.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2089" for this suite.
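Editor's note: the implicit-nonroot-uid spec above sets runAsNonRoot: true without any runAsUser, so the kubelet must trust the numeric non-root UID baked into the image's USER directive. A sketch; the image reference is an assumption (any image whose USER is a numeric non-zero UID would do):

apiVersion: v1
kind: Pod
metadata:
  name: implicit-nonroot-uid
spec:
  restartPolicy: Never
  containers:
  - name: implicit-nonroot-uid
    image: k8s.gcr.io/e2e-test-images/nonroot:1.1   # assumed: image with a numeric non-root USER
    command: ["id", "-u"]
    securityContext:
      runAsNonRoot: true           # no runAsUser: the image's UID is used and validated

Note the validation is only possible when the image declares a numeric USER; a symbolic user name cannot be proven non-root at admission, and such a pod would be refused.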
• [SLOW TEST:16.402 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":3,"skipped":233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Mount propagation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:06.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename mount-propagation
STEP: Waiting for a default service account to be provisioned in namespace
[It] should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
Jun 10 23:24:06.669: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:08.673: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:10.675: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:12.674: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:14.672: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:16.674: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:18.675: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:20.673: INFO: The status of Pod master is Running (Ready = true)
Jun 10 23:24:20.691: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:22.698: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:24.694: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:26.695: INFO: The status of Pod slave is Running (Ready = true)
Jun 10 23:24:26.712: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:28.716: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:30.718: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:32.718: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:34.715: INFO: The status of Pod private is Running (Ready = true)
Jun 10 23:24:34.731: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:36.734: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:38.736: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:40.735: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:24:42.738: INFO: The status of Pod default is Running (Ready = true)
Jun 10 23:24:42.743: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:42.743: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:43.892: INFO: Exec stderr: ""
Jun 10 23:24:43.896: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:43.896: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:43.997: INFO: Exec stderr: ""
Jun 10 23:24:44.000: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.000: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.104: INFO: Exec stderr: ""
Jun 10 23:24:44.107: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.107: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.191: INFO: Exec stderr: ""
Jun 10 23:24:44.193: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.193: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.280: INFO: Exec stderr: ""
Jun 10 23:24:44.282: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.282: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.371: INFO: Exec stderr: ""
Jun 10 23:24:44.374: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.374: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.461: INFO: Exec stderr: ""
Jun 10 23:24:44.464: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.464: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.553: INFO: Exec stderr: ""
Jun 10 23:24:44.556: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.556: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.641: INFO: Exec stderr: ""
Jun 10 23:24:44.644: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.644: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.760: INFO: Exec stderr: ""
Jun 10 23:24:44.762: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.762: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.851: INFO: Exec stderr: ""
Jun 10 23:24:44.853: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.853: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:44.953: INFO: Exec stderr: ""
Jun 10 23:24:44.955: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:44.955: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:45.052: INFO: Exec stderr: ""
Jun 10 23:24:45.055: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:45.055: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:45.144: INFO: Exec stderr: ""
Jun 10 23:24:45.146: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:45.146: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:45.230: INFO: Exec stderr: ""
Jun 10 23:24:45.232: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:45.232: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:45.321: INFO: Exec stderr: ""
Jun 10 23:24:45.324: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:45.324: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:45.410: INFO: Exec stderr: ""
Jun 10 23:24:45.412: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:45.412: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:45.500: INFO: Exec stderr: ""
Jun 10 23:24:45.503: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:45.503: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:45.588: INFO: Exec stderr: ""
Jun 10 23:24:45.591: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:45.591: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:45.671: INFO: Exec stderr: ""
Jun 10 23:24:51.695: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-7145"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-7145"/host; echo host > "/var/lib/kubelet/mount-propagation-7145"/host/file] Namespace:mount-propagation-7145 PodName:hostexec-node1-j8vs7 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 10 23:24:51.695: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:51.794: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:51.794: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:51.877: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Jun 10 23:24:51.880: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:51.880: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:52.007: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 10 23:24:52.010: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:52.010: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:52.098: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 10 23:24:52.101: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:52.101: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:52.219: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 10 23:24:52.222: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:52.222: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:52.308: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jun 10 23:24:52.311: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:24:52.311: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:24:52.402: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jun 10 23:24:52.404: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7145
PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:52.404: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:52.493: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Jun 10 23:24:52.496: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:52.496: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:52.586: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:52.588: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:52.589: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:52.673: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:52.676: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:52.676: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:52.786: INFO: pod slave mount host: stdout: "host", stderr: "" error: Jun 10 23:24:52.788: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:52.788: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:52.868: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:52.872: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:52.872: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:53.051: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:53.055: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:53.055: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:53.252: INFO: pod private mount private: stdout: "private", stderr: "" error: Jun 10 23:24:53.255: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:53.255: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:53.353: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:53.355: INFO: ExecWithOptions {Command:[/bin/sh -c cat 
/mnt/test/host/file] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:53.355: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:53.501: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:53.505: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:53.505: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:53.633: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:53.636: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:53.636: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:53.726: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:53.728: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:53.728: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:53.828: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:53.832: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:53.832: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:53.912: INFO: pod default mount default: stdout: "default", stderr: "" error: Jun 10 23:24:53.915: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:53.915: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:54.000: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Jun 10 23:24:54.000: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-7145"/master/file` = master] Namespace:mount-propagation-7145 PodName:hostexec-node1-j8vs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 10 23:24:54.000: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:54.084: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-7145"/slave/file] Namespace:mount-propagation-7145 PodName:hostexec-node1-j8vs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 10 23:24:54.084: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:54.167: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-7145"/host] Namespace:mount-propagation-7145 PodName:hostexec-node1-j8vs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 10 23:24:54.167: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:54.264: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-7145 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:54.264: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:54.384: INFO: Exec stderr: "" Jun 10 23:24:54.386: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-7145 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:54.386: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:54.478: INFO: Exec stderr: "" Jun 10 23:24:54.481: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-7145 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:54.481: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:54.573: INFO: Exec stderr: "" Jun 10 23:24:54.576: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-7145 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 23:24:54.576: INFO: >>> kubeConfig: /root/.kube/config Jun 10 23:24:54.661: INFO: Exec stderr: "" Jun 10 23:24:54.661: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-7145"] Namespace:mount-propagation-7145 PodName:hostexec-node1-j8vs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 10 23:24:54.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node1-j8vs7 in namespace mount-propagation-7145 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:24:54.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-7145" for this suite. 
• [SLOW TEST:48.137 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":1,"skipped":263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:49.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Jun 10 23:24:49.187: INFO: Waiting up to 5m0s for pod "downward-api-fcc4a4ff-b5d9-4cd2-a67d-a0374b3a299e" in namespace "downward-api-4485" to be "Succeeded or Failed" Jun 10 23:24:49.190: INFO: Pod "downward-api-fcc4a4ff-b5d9-4cd2-a67d-a0374b3a299e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151176ms Jun 10 23:24:51.193: INFO: Pod "downward-api-fcc4a4ff-b5d9-4cd2-a67d-a0374b3a299e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005270597s Jun 10 23:24:53.196: INFO: Pod "downward-api-fcc4a4ff-b5d9-4cd2-a67d-a0374b3a299e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008944642s Jun 10 23:24:55.199: INFO: Pod "downward-api-fcc4a4ff-b5d9-4cd2-a67d-a0374b3a299e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011886552s STEP: Saw pod success Jun 10 23:24:55.199: INFO: Pod "downward-api-fcc4a4ff-b5d9-4cd2-a67d-a0374b3a299e" satisfied condition "Succeeded or Failed" Jun 10 23:24:55.202: INFO: Trying to get logs from node node1 pod downward-api-fcc4a4ff-b5d9-4cd2-a67d-a0374b3a299e container dapi-container: STEP: delete the pod Jun 10 23:24:55.604: INFO: Waiting for pod downward-api-fcc4a4ff-b5d9-4cd2-a67d-a0374b3a299e to disappear Jun 10 23:24:55.606: INFO: Pod downward-api-fcc4a4ff-b5d9-4cd2-a67d-a0374b3a299e no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:24:55.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4485" for this suite. 
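For reference, the Downward API spec above maps onto env vars populated from fieldRef. A minimal sketch, with illustrative pod name and image (the fieldPath values are the real API fields the test exercises):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo         # illustrative name
spec:
  hostNetwork: true               # with host networking, host IP and pod IP should match
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                # illustrative image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP POD_IP=$POD_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF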
• [SLOW TEST:6.460 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":4,"skipped":283,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:06.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet W0610 23:24:06.206115 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 10 23:24:06.206: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 10 23:24:06.208: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-df9edd6c-bf7d-4428-86e9-4cdaa821f4f3 in namespace kubelet-4832
I0610 23:24:06.242348      39 runners.go:190] Created replication controller with name: cleanup20-df9edd6c-bf7d-4428-86e9-4cdaa821f4f3, namespace: kubelet-4832, replica count: 20
I0610 23:24:16.293391      39 runners.go:190] cleanup20-df9edd6c-bf7d-4428-86e9-4cdaa821f4f3 Pods: 20 out of 20 created, 1 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0610 23:24:26.293742      39 runners.go:190] cleanup20-df9edd6c-bf7d-4428-86e9-4cdaa821f4f3 Pods: 20 out of 20 created, 17 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0610 23:24:36.294663      39 runners.go:190] cleanup20-df9edd6c-bf7d-4428-86e9-4cdaa821f4f3 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 10 23:24:37.295: INFO: Checking pods on node node2 via /runningpods endpoint
Jun 10 23:24:37.295: INFO: Checking pods on node node1 via /runningpods endpoint
Jun 10 23:24:37.393: INFO: Resource usage on node "master3":
container   cpu(cores)   memory_working_set(MB)   memory_rss(MB)
"/"         0.367        3468.50                  1467.35
"runtime"   0.100        511.30                   236.19
"kubelet"   0.100        511.30                   236.19

Resource usage on node "node1":
container   cpu(cores)   memory_working_set(MB)   memory_rss(MB)
"/"         2.010        6227.76                  2270.26
"runtime"   0.901        2533.53                  570.20
"kubelet"   0.901        2533.53                  570.20

Resource usage on node "node2":
container   cpu(cores)   memory_working_set(MB)   memory_rss(MB)
"/"         1.615        4036.24                  1156.93
"runtime"   1.349        1619.75                  611.60
"kubelet"   1.349        1619.75                  611.60

Resource usage on node "master1":
container   cpu(cores)   memory_working_set(MB)   memory_rss(MB)
"/"         0.672        4953.17                  1762.66
"runtime"   0.116        727.92                   319.18
"kubelet"   0.116        727.92                   319.18

Resource usage on node "master2":
container   cpu(cores)   memory_working_set(MB)   memory_rss(MB)
"/"         0.550        3811.96                  1694.95
"runtime"   0.096        582.39                   239.81
"kubelet"   0.096        582.39                   239.81

STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-df9edd6c-bf7d-4428-86e9-4cdaa821f4f3 in namespace kubelet-4832, will wait for the garbage collector to delete the pods
Jun 10 23:24:37.453: INFO: Deleting ReplicationController cleanup20-df9edd6c-bf7d-4428-86e9-4cdaa821f4f3 took: 6.30338ms
Jun 10 23:24:38.054: INFO: Terminating ReplicationController cleanup20-df9edd6c-bf7d-4428-86e9-4cdaa821f4f3 pods took: 600.96048ms
Jun 10 23:24:57.655: INFO: Checking pods on node node2 via /runningpods endpoint
Jun 10 23:24:57.655: INFO: Checking pods on node node1 via /runningpods endpoint
Jun 10 23:24:57.821: INFO: Deleting 20 pods on 2 nodes completed in 1.165815573s after the RC was deleted
Jun 10 23:24:57.821: INFO: CPU usage of containers on node "master2":
container   5th%    20th%   50th%   70th%   90th%   95th%   99th%
"/"         0.000   0.383   0.384   0.447   0.509   0.509   0.509
"runtime"   0.000   0.000   0.096   0.099   0.099   0.099   0.099
"kubelet"   0.000   0.000   0.096   0.099   0.099   0.099   0.099

CPU usage of containers on node "master3":
container   5th%    20th%   50th%   70th%   90th%   95th%   99th%
"/"         0.000   0.322   0.361   0.367   0.398   0.398   0.398
"runtime"   0.000   0.000   0.100   0.100   0.100   0.100   0.100
"kubelet"   0.000   0.000   0.100   0.100   0.100   0.100   0.100

CPU usage of containers on node "node1":
container   5th%    20th%   50th%   70th%   90th%   95th%   99th%
"/"         0.000   0.000   1.949   1.949   2.010   2.010   2.010
"runtime"   0.000   0.000   0.822   0.890   0.890   0.890   0.890
"kubelet"   0.000   0.000   0.822   0.890   0.890   0.890   0.890

CPU usage of containers on node "node2":
container   5th%    20th%   50th%   70th%   90th%   95th%   99th%
"/"         0.000   0.000   1.615   1.615   1.798   1.798   1.798
"runtime"   0.000   0.000   0.772   0.886   0.886   0.886   0.886
"kubelet"   0.000   0.000   0.772   0.886   0.886   0.886   0.886

CPU usage of containers on node "master1":
container   5th%    20th%   50th%   70th%   90th%   95th%   99th%
"/"         0.000   0.441   0.448   0.498   0.572   0.572   0.572
"runtime"   0.000   0.000   0.128   0.128   0.132   0.132   0.132
"kubelet"   0.000   0.000   0.128   0.128   0.132   0.132   0.132

[AfterEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node node2
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node node1
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:57.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-4832" for this suite.

• [SLOW TEST:51.672 seconds]
[sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:57.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should reject a Pod requesting a RuntimeClass with conflicting node selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41
[AfterEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:24:57.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-4623" for this suite.
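For reference, the rejection exercised above happens at admission: the RuntimeClass admission plugin merges the class's scheduling.nodeSelector into the pod's own nodeSelector and refuses pods where the two can never be satisfied together. A minimal sketch, with illustrative names, handler, and labels:

kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: conflict-demo              # illustrative name
handler: runc                      # illustrative handler
scheduling:
  nodeSelector:
    example.com/zone: zone-a       # hypothetical label
EOF
# A pod requesting this RuntimeClass while pinning the same key to a different
# value conflicts with the merged selector and is rejected:
kubectl run conflict-pod --image=busybox --restart=Never \
  --overrides='{"spec":{"runtimeClassName":"conflict-demo","nodeSelector":{"example.com/zone":"zone-b"}}}'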
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":2,"skipped":117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:58.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:24:58.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-4283" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":3,"skipped":206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:34.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Jun 10 23:25:04.681: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:04.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5415" for this suite. 
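For reference, the PreStop behaviour verified above comes from the container lifecycle API: on graceful deletion the kubelet runs the preStop hook to completion (bounded by the grace period) before proceeding with termination, so the pod stays Running while the hook executes. A minimal sketch, with illustrative name, image, and hook duration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo              # illustrative name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox                # illustrative image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]   # kubelet waits for this before killing the container
EOF
kubectl delete pod prestop-demo   # pod remains Running while the hook sleeps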
• [SLOW TEST:30.085 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":3,"skipped":194,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:06.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0610 23:24:06.311585 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 10 23:24:06.311: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 10 23:24:06.314: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-d9277bc0-71a0-4bff-afc5-31cb3099d8da in namespace container-probe-7174 Jun 10 23:24:18.336: INFO: Started pod busybox-d9277bc0-71a0-4bff-afc5-31cb3099d8da in namespace container-probe-7174 STEP: checking the pod's current state and verifying that restartCount is present Jun 10 23:24:18.338: INFO: Initial restart count of pod busybox-d9277bc0-71a0-4bff-afc5-31cb3099d8da is 0 Jun 10 23:25:08.448: INFO: Restart count of pod container-probe-7174/busybox-d9277bc0-71a0-4bff-afc5-31cb3099d8da is now 1 (50.109041656s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:08.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7174" for this suite. 
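For reference, the restart observed above is driven by an exec liveness probe whose command outruns its timeout; the [MinimumKubeletVersion:1.20] tag reflects that exec probe timeouts are only enforced from kubelet 1.20 on (the ExecProbeTimeout feature gate). A minimal sketch, with illustrative name and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-timeout-demo     # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox                # illustrative image
    command: ["sleep", "3600"]
    livenessProbe:
      exec:
        command: ["sh", "-c", "sleep 600"]   # always exceeds timeoutSeconds
      timeoutSeconds: 1
      failureThreshold: 1
EOF
kubectl get pod liveness-timeout-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'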
• [SLOW TEST:62.177 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":1,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:58.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Jun 10 23:24:58.687: INFO: Waiting up to 5m0s for pod "pod-always-succeedd3faaf9c-7515-4cfe-a46b-6a1c5db848d7" in namespace "pods-7061" to be "Succeeded or Failed" Jun 10 23:24:58.689: INFO: Pod "pod-always-succeedd3faaf9c-7515-4cfe-a46b-6a1c5db848d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105686ms Jun 10 23:25:00.692: INFO: Pod "pod-always-succeedd3faaf9c-7515-4cfe-a46b-6a1c5db848d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005160959s Jun 10 23:25:02.695: INFO: Pod "pod-always-succeedd3faaf9c-7515-4cfe-a46b-6a1c5db848d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008284773s Jun 10 23:25:04.699: INFO: Pod "pod-always-succeedd3faaf9c-7515-4cfe-a46b-6a1c5db848d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011781928s Jun 10 23:25:06.704: INFO: Pod "pod-always-succeedd3faaf9c-7515-4cfe-a46b-6a1c5db848d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017364257s Jun 10 23:25:08.707: INFO: Pod "pod-always-succeedd3faaf9c-7515-4cfe-a46b-6a1c5db848d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020631466s STEP: Saw pod success Jun 10 23:25:08.708: INFO: Pod "pod-always-succeedd3faaf9c-7515-4cfe-a46b-6a1c5db848d7" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:10.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7061" for this suite. 
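For reference, a minimal pod of the shape the spec above creates (a container that exits 0 under restartPolicy: Never); the assertion in the test is that the kubelet does not create a second sandbox once all containers are done. Name and image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-succeed-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox                # illustrative image
    command: ["true"]             # exits 0 immediately
EOF
# Inspecting events should show a single create/start sequence and no
# new sandbox after the pod reaches Succeeded:
kubectl get events --field-selector involvedObject.name=always-succeed-demo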
• [SLOW TEST:12.079 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":4,"skipped":435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:55.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Jun 10 23:24:55.871: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Jun 10 23:24:55.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6205 create -f -' Jun 10 23:24:56.323: INFO: stderr: "" Jun 10 23:24:56.323: INFO: stdout: "secret/test-secret created\n" Jun 10 23:24:56.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6205 create -f -' Jun 10 23:24:56.703: INFO: stderr: "" Jun 10 23:24:56.703: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Jun 10 23:25:10.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6205 logs secret-test-pod test-container' Jun 10 23:25:10.900: INFO: stderr: "" Jun 10 23:25:10.900: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:10.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-6205" for this suite. 
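For reference, the secret example above (secret test-secret, pod secret-test-pod, file /etc/secret-volume/data-1 containing value-1) can be reproduced with a secret volume; the exact manifests the example ships are not shown in the log, so this is a sketch with an illustrative image:

kubectl create secret generic test-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                # illustrative image
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
EOF
kubectl logs secret-test-pod test-container   # prints value-1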
• [SLOW TEST:15.068 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":5,"skipped":397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:11.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:11.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-3801" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":6,"skipped":573,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:10.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Jun 10 23:25:10.953: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-11451d67-8027-43bb-8caf-671c9943d0dd" in namespace "security-context-test-4572" to be "Succeeded or Failed" Jun 10 23:25:10.956: INFO: Pod "busybox-privileged-true-11451d67-8027-43bb-8caf-671c9943d0dd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.838702ms Jun 10 23:25:12.960: INFO: Pod "busybox-privileged-true-11451d67-8027-43bb-8caf-671c9943d0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007626497s Jun 10 23:25:14.965: INFO: Pod "busybox-privileged-true-11451d67-8027-43bb-8caf-671c9943d0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012290873s Jun 10 23:25:16.971: INFO: Pod "busybox-privileged-true-11451d67-8027-43bb-8caf-671c9943d0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018519843s Jun 10 23:25:18.976: INFO: Pod "busybox-privileged-true-11451d67-8027-43bb-8caf-671c9943d0dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.023511031s Jun 10 23:25:18.976: INFO: Pod "busybox-privileged-true-11451d67-8027-43bb-8caf-671c9943d0dd" satisfied condition "Succeeded or Failed" Jun 10 23:25:18.982: INFO: Got logs for pod "busybox-privileged-true-11451d67-8027-43bb-8caf-671c9943d0dd": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:18.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4572" for this suite. • [SLOW TEST:8.071 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":5,"skipped":533,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:19.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-5915/configmap-test-5bb5c107-ef56-4d6b-9580-b6614c18de89 STEP: Updating configMap configmap-5915/configmap-test-5bb5c107-ef56-4d6b-9580-b6614c18de89 STEP: Verifying update of ConfigMap configmap-5915/configmap-test-5bb5c107-ef56-4d6b-9580-b6614c18de89 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:19.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5915" for this suite. 
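For reference, the ConfigMap create/update/verify cycle above has a direct kubectl equivalent (names and values here are illustrative):

kubectl create configmap demo-config --from-literal=key=v1
kubectl patch configmap demo-config -p '{"data":{"key":"v2"}}'
kubectl get configmap demo-config -o jsonpath='{.data.key}'   # -> v2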
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":6,"skipped":542,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:11.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Jun 10 23:25:11.587: INFO: Waiting up to 5m0s for pod "security-context-61d8af8e-119a-4a25-9756-001b1c2c398e" in namespace "security-context-2968" to be "Succeeded or Failed" Jun 10 23:25:11.590: INFO: Pod "security-context-61d8af8e-119a-4a25-9756-001b1c2c398e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393581ms Jun 10 23:25:13.593: INFO: Pod "security-context-61d8af8e-119a-4a25-9756-001b1c2c398e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005627671s Jun 10 23:25:15.599: INFO: Pod "security-context-61d8af8e-119a-4a25-9756-001b1c2c398e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011063581s Jun 10 23:25:17.604: INFO: Pod "security-context-61d8af8e-119a-4a25-9756-001b1c2c398e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015996072s Jun 10 23:25:19.607: INFO: Pod "security-context-61d8af8e-119a-4a25-9756-001b1c2c398e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019033347s Jun 10 23:25:21.611: INFO: Pod "security-context-61d8af8e-119a-4a25-9756-001b1c2c398e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.023120098s STEP: Saw pod success Jun 10 23:25:21.611: INFO: Pod "security-context-61d8af8e-119a-4a25-9756-001b1c2c398e" satisfied condition "Succeeded or Failed" Jun 10 23:25:21.613: INFO: Trying to get logs from node node2 pod security-context-61d8af8e-119a-4a25-9756-001b1c2c398e container test-container: STEP: delete the pod Jun 10 23:25:21.627: INFO: Waiting for pod security-context-61d8af8e-119a-4a25-9756-001b1c2c398e to disappear Jun 10 23:25:21.630: INFO: Pod security-context-61d8af8e-119a-4a25-9756-001b1c2c398e no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:21.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-2968" for this suite. 
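For reference, the field under test above is pod.spec.securityContext.supplementalGroups, which adds extra GIDs to every container process. A minimal sketch with an illustrative name, image, and GID:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: supplemental-groups-demo   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    supplementalGroups: [1234]     # illustrative GID
  containers:
  - name: test-container
    image: busybox                 # illustrative image
    command: ["sh", "-c", "id -G"]
EOF
kubectl logs supplemental-groups-demo   # group list should include 1234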
• [SLOW TEST:10.088 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":7,"skipped":709,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:27.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-31a916ff-2b7f-4e18-babb-6bda47ffd6b5 in namespace container-probe-8708 Jun 10 23:24:33.632: INFO: Started pod busybox-31a916ff-2b7f-4e18-babb-6bda47ffd6b5 in namespace container-probe-8708 STEP: checking the pod's current state and verifying that restartCount is present Jun 10 23:24:33.634: INFO: Initial restart count of pod busybox-31a916ff-2b7f-4e18-babb-6bda47ffd6b5 is 0 Jun 10 23:25:21.730: INFO: Restart count of pod container-probe-8708/busybox-31a916ff-2b7f-4e18-babb-6bda47ffd6b5 is now 1 (48.096196348s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:21.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8708" for this suite. 
• [SLOW TEST:54.151 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":3,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:06.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0610 23:24:06.174211 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 10 23:24:06.174: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 10 23:24:06.176: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-42105928-f005-4890-8aac-0e0cd527e29c in namespace container-probe-5327 Jun 10 23:24:24.195: INFO: Started pod startup-42105928-f005-4890-8aac-0e0cd527e29c in namespace container-probe-5327 STEP: checking the pod's current state and verifying that restartCount is present Jun 10 23:24:24.198: INFO: Initial restart count of pod startup-42105928-f005-4890-8aac-0e0cd527e29c is 0 Jun 10 23:25:24.329: INFO: Restart count of pod container-probe-5327/startup-42105928-f005-4890-8aac-0e0cd527e29c is now 1 (1m0.131290198s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:24.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5327" for this suite. 
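For reference, the gating behaviour verified above is that a livenessProbe does not run until the startupProbe has succeeded; only then do liveness failures accumulate and restart the container. A minimal sketch with illustrative name, image, and file paths:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: startup-demo              # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox                # illustrative image
    # create the startup file after 10s; never create the liveness file
    command: ["sh", "-c", "sleep 10 && touch /tmp/startup && sleep 3600"]
    startupProbe:
      exec:
        command: ["test", "-f", "/tmp/startup"]
      failureThreshold: 30
      periodSeconds: 2
    livenessProbe:
      exec:
        command: ["test", "-f", "/tmp/healthy"]   # fails once it begins running
      failureThreshold: 3
      periodSeconds: 2
EOF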
• [SLOW TEST:78.192 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":1,"skipped":62,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:24.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 Jun 10 23:25:24.676: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:24.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-2382" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:19.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 10 23:25:19.106: INFO: Waiting up to 5m0s for pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2" in namespace "security-context-718" to be "Succeeded or Failed" Jun 10 23:25:19.108: INFO: Pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117171ms Jun 10 23:25:21.111: INFO: Pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005402571s Jun 10 23:25:23.116: INFO: Pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2": Phase="Pending", Reason="", readiness=false. 
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:25:24.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Jun 10 23:25:24.676: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:25:24.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-2382" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:25:19.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 10 23:25:19.106: INFO: Waiting up to 5m0s for pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2" in namespace "security-context-718" to be "Succeeded or Failed"
Jun 10 23:25:19.108: INFO: Pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117171ms
Jun 10 23:25:21.111: INFO: Pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005402571s
Jun 10 23:25:23.116: INFO: Pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009653796s
Jun 10 23:25:25.121: INFO: Pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015120828s
STEP: Saw pod success
Jun 10 23:25:25.121: INFO: Pod "security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2" satisfied condition "Succeeded or Failed"
Jun 10 23:25:25.124: INFO: Trying to get logs from node node2 pod security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2 container test-container:
STEP: delete the pod
Jun 10 23:25:25.136: INFO: Waiting for pod security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2 to disappear
Jun 10 23:25:25.138: INFO: Pod security-context-7e39e5dd-fa51-4ddf-8ca5-139abd965fa2 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:25:25.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-718" for this suite.

• [SLOW TEST:6.073 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":7,"skipped":549,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
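The STEP line in the spec above shows the seccomp profile being requested through the legacy seccomp.security.alpha.kubernetes.io/pod annotation, which on this v1.21 cluster maps to the container runtime's default profile. A hedged sketch of an equivalent pod against the same v1.21 API; the structured securityContext.seccompProfile field shown alongside the annotation is the non-deprecated spelling, and the pod name, image, and command here are illustrative rather than the test's literal values:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // seccompRuntimeDefaultPod sketches a pod that asks the runtime to
    // apply its default seccomp profile, then runs to completion so a
    // caller can wait for "Succeeded or Failed".
    func seccompRuntimeDefaultPod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "security-context-seccomp", // hypothetical name
                // Legacy annotation, as logged in the STEP above.
                Annotations: map[string]string{
                    "seccomp.security.alpha.kubernetes.io/pod": "runtime/default",
                },
            },
            Spec: v1.PodSpec{
                // Structured equivalent of the annotation (available since v1.19).
                SecurityContext: &v1.PodSecurityContext{
                    SeccompProfile: &v1.SeccompProfile{Type: v1.SeccompProfileTypeRuntimeDefault},
                },
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:  "test-container",
                    Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                    // Printing the Seccomp line of the process status is one way
                    // to observe that a filter is in effect (illustrative check).
                    Command: []string{"sh", "-c", "grep Seccomp /proc/1/status"},
                }},
            },
        }
    }

    func main() { fmt.Println(seccompRuntimeDefaultPod().Name) }

------------------------------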
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:25:21.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
E0610 23:25:25.736627 30 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 207 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x654af00, 0x9c066c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x654af00, 0x9c066c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc002406f0c, 0x2, 0x2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003cbbf40, 0xc002406f00, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc003a60f60, 0xc003cbbf40, 0xc0051b3c20, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc003a60f60, 0xc003cbbf40, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003a60f60, 0xc003cbbf40, 0xc003a60f60, 0xc003cbbf40)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003cbbf40, 0x14, 0xc0051b95f0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc00525fa20, 0xc00505c1b0, 0x14, 0xc0051b95f0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0008fe000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0008fe000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc0004333a0, 0x76a2fe0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc004455d10, 0x0, 0x76a2fe0, 0xc000190840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc004455d10, 0x76a2fe0, 0xc000190840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc004c8a000, 0xc004455d10, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc004c8a000, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc004c8a000, 0xc00417e030)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7fe85ade5d28, 0xc001b83080, 0x6f170c8, 0x14, 0xc002b24660, 0x3, 0x3, 0x7759478, 0xc000190840, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x76a80c0, 0xc001b83080, 0x6f170c8, 0x14, 0xc00121aa40, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x76a80c0, 0xc001b83080, 0x6f170c8, 0x14, 0xc00076ba40, 0x2, 0x2, 0x25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001b83080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001b83080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001b83080, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-8489".
STEP: Found 4 events.
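The panic above originates in the e2e framework's wait helper at resource.go:334, inside the condition polled by WaitForPodContainerStarted. In the core/v1 API, ContainerStatus.Started is a *bool that the kubelet leaves nil until the container has actually started, and the pod events below stop at "Created", so the field was still nil when the condition dereferenced it. A hedged reconstruction of the failing condition, with names following the stack trace; the nil guard marked below is the obvious fix, not necessarily the exact upstream patch:

    package pod

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        clientset "k8s.io/client-go/kubernetes"
    )

    // podContainerStarted mirrors the helper named in the stack trace: it
    // returns a condition that polls the pod and reports whether the
    // container at containerIndex has started.
    func podContainerStarted(c clientset.Interface, namespace, podName string, containerIndex int) wait.ConditionFunc {
        return func() (bool, error) {
            pod, err := c.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            statuses := pod.Status.ContainerStatuses
            if len(statuses) < containerIndex+1 {
                return false, nil
            }
            // Started is a *bool; between the kubelet's "Created" and
            // "Started" events it is still nil, and dereferencing it
            // unguarded is the nil-pointer panic seen above. Guard first:
            if statuses[containerIndex].Started == nil {
                return false, nil
            }
            return *statuses[containerIndex].Started, nil
        }
    }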
Jun 10 23:25:25.740: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for startup-c968f7e2-2c2d-4ce6-bf39-080e73c75f95: { } Scheduled: Successfully assigned container-probe-8489/startup-c968f7e2-2c2d-4ce6-bf39-080e73c75f95 to node1
Jun 10 23:25:25.740: INFO: At 2022-06-10 23:25:25 +0000 UTC - event for startup-c968f7e2-2c2d-4ce6-bf39-080e73c75f95: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Jun 10 23:25:25.740: INFO: At 2022-06-10 23:25:25 +0000 UTC - event for startup-c968f7e2-2c2d-4ce6-bf39-080e73c75f95: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 268.809124ms
Jun 10 23:25:25.740: INFO: At 2022-06-10 23:25:25 +0000 UTC - event for startup-c968f7e2-2c2d-4ce6-bf39-080e73c75f95: {kubelet node1} Created: Created container busybox
Jun 10 23:25:25.743: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 10 23:25:25.743: INFO: startup-c968f7e2-2c2d-4ce6-bf39-080e73c75f95 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:25:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:25:21 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:25:21 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:25:21 +0000 UTC }]
Jun 10 23:25:25.743: INFO:
Jun 10 23:25:25.749: INFO: Logging node info for node master1
Jun 10 23:25:25.751: INFO: Node Info: &Node{ObjectMeta:{master1 e472448e-87fd-4e8d-bbb7-98d43d3d8a87 78369 0 2022-06-10 19:57:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:16 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:16 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:16 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:25:16 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 
registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:25:25.752: INFO: Logging kubelet events for node master1
Jun 10 23:25:25.754: INFO: Logging pods the kubelet thinks is on node master1
Jun 10 23:25:25.789: INFO: node-feature-discovery-controller-cff799f9f-74qhv started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.789: INFO: Container nfd-controller ready: true, restart count 0
Jun 10 23:25:25.789: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:25:25.789: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:25:25.789: INFO: Container prometheus-operator ready: true, restart count 0
Jun 10 23:25:25.789: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:25:25.789: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:25:25.789: INFO: Container node-exporter ready: true, restart count 0
Jun 10 23:25:25.789: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.789: INFO: Container kube-apiserver ready: true, restart count 0
Jun 10 23:25:25.789: INFO: kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.789: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 10 23:25:25.789: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.789: INFO: Container kube-scheduler ready: true, restart count 0
Jun 10 23:25:25.789: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.789: INFO: Container kube-proxy ready: true, restart count 3
Jun 10 23:25:25.789: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:25:25.789: INFO: Container docker-registry ready: true, restart count 0
Jun 10 23:25:25.789: INFO: Container nginx ready: true, restart count 0
Jun 10 23:25:25.789: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:25:25.789: INFO: Init container install-cni ready: true, restart count 0
Jun 10 23:25:25.789: INFO: Container kube-flannel ready: true, restart count 1
Jun 10 23:25:25.790: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.790: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:25:25.790: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.790: INFO: Container autoscaler ready: true, restart count 1
Jun 10 23:25:25.901:
INFO: Latency metrics for node master1 Jun 10 23:25:25.901: INFO: Logging node info for node master2 Jun 10 23:25:25.904: INFO: Node Info: &Node{ObjectMeta:{master2 66c7af40-c8de-462b-933d-792f10a44a43 78588 0 2022-06-10 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 
UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:25 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:25 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:25 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:25:25 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:25:25.904: INFO: Logging kubelet events for node master2
Jun 10 23:25:25.906: INFO: Logging pods the kubelet thinks is on node master2
Jun 10 23:25:25.913: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.914: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:25:25.914: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.914: INFO: Container coredns ready: true, restart count 1
Jun 10 23:25:25.914: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.914: INFO: Container kube-controller-manager ready: true, restart count 1
Jun 10 23:25:25.914: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.914: INFO: Container kube-scheduler ready: true, restart count 3
Jun 10 23:25:25.914: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.914: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 23:25:25.914: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:25:25.914: INFO: Init container install-cni ready: true, restart count 2
Jun 10 23:25:25.914: INFO: Container kube-flannel ready: true, restart count 1
Jun 10 23:25:25.914: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:25.914: INFO: Container kube-apiserver ready: true, restart count 0
Jun 10 23:25:25.914: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:25:25.914: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:25:25.914: INFO: Container node-exporter ready: true, restart count 0
Jun 10 23:25:25.986: INFO: Latency metrics for node
master2 Jun 10 23:25:25.986: INFO: Logging node info for node master3 Jun 10 23:25:25.989: INFO: Node Info: &Node{ObjectMeta:{master3 e51505ec-e791-4bbe-aeb1-bd0671fd4464 78487 0 2022-06-10 19:58:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 
20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:21 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:21 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:21 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:25:21 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:25:25.989: INFO: Logging kubelet events for node master3
Jun 10 23:25:25.991: INFO: Logging pods the kubelet thinks is on node master3
Jun 10 23:25:26.000: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:26.000: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 23:25:26.000: INFO: kube-flannel-jpd2j started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:25:26.000: INFO: Init container install-cni ready: true, restart count 2
Jun 10 23:25:26.000: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 23:25:26.000: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:26.000: INFO: Container kube-apiserver ready: true, restart count 0
Jun 10 23:25:26.000: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:26.000: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 10 23:25:26.001: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:26.001: INFO: Container kube-scheduler ready: true, restart count 1
Jun 10 23:25:26.001: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:26.001: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:25:26.001: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:25:26.001: INFO: Container coredns ready: true, restart count 1
Jun 10 23:25:26.001: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:25:26.001: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:25:26.001: INFO: Container node-exporter ready: true, restart count 0
Jun 10 23:25:26.076: INFO: Latency metrics for node master3
Jun 10 23:25:26.076: INFO: Logging node info for node node1
Jun 10 23:25:26.079: INFO: Node Info: &Node{ObjectMeta:{node1 fa951133-0317-499e-8a0a-fc7a0636a371 78515 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true
feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 22:28:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:23 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:23 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:23 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:25:23 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 23:25:26.080: INFO: Logging kubelet events for node node1 Jun 10 23:25:26.083: INFO: Logging pods the kubelet thinks is on node node1 Jun 10 23:25:26.097: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.097: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:25:26.097: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 23:25:26.097: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:25:26.097: INFO: Container node-exporter ready: true, restart count 0 Jun 10 23:25:26.097: INFO: pod-submit-status-0-8 started at 2022-06-10 23:25:18 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.097: INFO: Container busybox ready: false, restart count 0 Jun 10 23:25:26.097: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:25:26.098: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 10 23:25:26.098: INFO: Container collectd ready: true, restart count 0 Jun 10 23:25:26.098: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:25:26.098: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:25:26.098: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10 20:12:30 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 23:25:26.098: INFO: startup-c968f7e2-2c2d-4ce6-bf39-080e73c75f95 started at 2022-06-10 23:25:21 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container busybox ready: false, restart count 0 Jun 10 23:25:26.098: INFO: termination-message-container188e1219-b844-415f-a82f-a4bcad0192d3 started at 2022-06-10 23:25:22 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container termination-message-container ready: false, restart count 0 Jun 10 23:25:26.098: INFO: liveness-http started at (0+0 container statuses recorded) Jun 10 23:25:26.098: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 23:25:26.098: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:25:26.098: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:25:26.098: INFO: pod-prestop-hook-2065f2fc-8bd8-4e5b-9442-9ae997a8cd46 started at 2022-06-10 23:24:35 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container nginx ready: false, restart count 0 Jun 10 23:25:26.098: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:25:26.098: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Init container install-cni ready: true, restart count 2 Jun 10 23:25:26.098: INFO: Container 
kube-flannel ready: true, restart count 2 Jun 10 23:25:26.098: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:25:26.098: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container tas-extender ready: true, restart count 0 Jun 10 23:25:26.098: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 23:25:26.098: INFO: pod-submit-status-1-8 started at 2022-06-10 23:25:21 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:26.098: INFO: Container busybox ready: false, restart count 0 Jun 10 23:25:26.098: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 10 23:25:26.098: INFO: Container config-reloader ready: true, restart count 0 Jun 10 23:25:26.098: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 23:25:26.098: INFO: Container grafana ready: true, restart count 0 Jun 10 23:25:26.098: INFO: Container prometheus ready: true, restart count 1 Jun 10 23:25:26.098: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 10 23:25:26.098: INFO: Container discover ready: false, restart count 0 Jun 10 23:25:26.098: INFO: Container init ready: false, restart count 0 Jun 10 23:25:26.098: INFO: Container install ready: false, restart count 0 Jun 10 23:25:27.044: INFO: Latency metrics for node node1 Jun 10 23:25:27.044: INFO: Logging node info for node node2 Jun 10 23:25:27.047: INFO: Node Info: &Node{ObjectMeta:{node2 e3ba5b73-7a35-4d3f-9138-31db06c90dc3 78512 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true 
feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 22:28:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-10 23:24:06 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:23 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:23 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:25:23 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:25:23 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 23:25:27.048: INFO: Logging kubelet events for node node2 Jun 10 23:25:27.050: INFO: Logging pods the kubelet thinks is on node node2 Jun 10 23:25:28.180: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 23:25:28.180: INFO: startup-72c8ec1c-5cf0-417d-9440-f16765e6daf0 started at 2022-06-10 23:24:54 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container busybox ready: false, restart count 0 Jun 10 23:25:28.180: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 23:25:28.180: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:25:28.180: INFO: Container node-exporter ready: true, restart count 0 Jun 10 23:25:28.180: INFO: startup-override-398e1c8b-08d9-4883-9071-73161ff9b02c started at (0+0 container statuses recorded) Jun 10 23:25:28.180: INFO: pod-submit-status-2-5 started at 2022-06-10 23:25:19 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container busybox ready: false, restart count 0 Jun 10 23:25:28.180: INFO: liveness-exec started at 2022-06-10 23:25:25 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container liveness-exec ready: false, 
restart count 0 Jun 10 23:25:28.180: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 23:25:28.180: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:25:28.180: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:25:28.180: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:25:28.180: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:25:28.180: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 23:25:28.180: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:25:28.180: INFO: startup-2598881d-38e2-48b2-8488-835bd91fb913 started at 2022-06-10 23:24:44 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container busybox ready: false, restart count 0 Jun 10 23:25:28.180: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 23:25:28.180: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 23:25:28.180: INFO: Init container install-cni ready: true, restart count 2 Jun 10 23:25:28.180: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 23:25:28.181: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded) Jun 10 23:25:28.181: INFO: Container discover ready: false, restart count 0 Jun 10 23:25:28.181: INFO: Container init ready: false, restart count 0 Jun 10 23:25:28.181: INFO: Container install ready: false, restart count 0 Jun 10 23:25:28.181: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.181: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:25:28.181: INFO: busybox-26df47a9-362e-44b5-ba44-a9633ca91729 started at 2022-06-10 23:25:04 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.181: INFO: Container busybox ready: false, restart count 0 Jun 10 23:25:28.181: INFO: liveness-6088ecd0-0f2d-4536-80de-4cb833ff562c started at 2022-06-10 23:24:06 +0000 UTC (0+1 container statuses recorded) Jun 10 23:25:28.181: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 23:25:28.181: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 10 23:25:28.181: INFO: Container collectd ready: true, restart count 0 Jun 10 23:25:28.181: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:25:28.181: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:25:30.023: INFO: Latency metrics for node node2 Jun 10 23:25:30.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8489" for this suite. •! 
Panic [8.336 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x654af00, 0x9c066c0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc002406f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003cbbf40, 0xc002406f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc003a60f60, 0xc003cbbf40, 0xc0051b3c20, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc003a60f60, 0xc003cbbf40, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003a60f60, 0xc003cbbf40, 0xc003a60f60, 0xc003cbbf40) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003cbbf40, 0x14, 0xc0051b95f0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc00525fa20, 0xc00505c1b0, 0x14, 0xc0051b95f0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001b83080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001b83080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001b83080, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 
10 23:25:21.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 10 23:25:30.075: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:30.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-587" for this suite. • [SLOW TEST:8.109 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":4,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:30.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Jun 10 23:25:30.101: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-333" to be "Succeeded or Failed" Jun 10 23:25:30.103: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.594309ms Jun 10 23:25:32.108: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007576754s Jun 10 23:25:34.113: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012583629s Jun 10 23:25:34.113: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:34.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-333" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":8,"skipped":745,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:30.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false Jun 10 23:25:48.282: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:49.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4103" for this suite. 
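------------------------------
A note on the Panic block earlier in this stretch ("should be ready immediately after startupProbe succeeds"): the trace dies with a nil-pointer dereference inside test/e2e/framework/pod.podContainerStarted (resource.go:334), a wait.ConditionFunc driven by wait.PollImmediate. A plausible reading, inferred from the trace rather than confirmed from source, is that the condition dereferences ContainerStatus.Started, a *bool the kubelet may not have populated yet. A minimal nil-guarded sketch in Go against the v1.21 client-go API this run uses; names loosely follow the framework and are illustrative:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podContainerStarted returns a poll condition that reports whether the
// idx-th container of the named pod has started, without panicking when
// the kubelet has not yet reported a Started value.
func podContainerStarted(c kubernetes.Interface, ns, name string, idx int) wait.ConditionFunc {
	return func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		statuses := pod.Status.ContainerStatuses
		if idx >= len(statuses) {
			return false, nil // status not reported yet; keep polling
		}
		started := statuses[idx].Started // *bool; nil until the kubelet fills it in
		return started != nil && *started, nil // the nil guard is the assumed fix
	}
}

------------------------------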
• [SLOW TEST:19.076 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":5,"skipped":413,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:25.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-398e1c8b-08d9-4883-9071-73161ff9b02c in namespace container-probe-7366 Jun 10 23:25:33.314: INFO: Started pod startup-override-398e1c8b-08d9-4883-9071-73161ff9b02c in namespace container-probe-7366 STEP: checking the pod's current state and verifying that restartCount is present Jun 10 23:25:33.316: INFO: Initial restart count of pod startup-override-398e1c8b-08d9-4883-9071-73161ff9b02c is 1 Jun 10 23:25:55.379: INFO: Restart count of pod container-probe-7366/startup-override-398e1c8b-08d9-4883-9071-73161ff9b02c is now 2 (22.062613381s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:25:55.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7366" for this suite. 
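------------------------------
The spec above ("should override timeoutGracePeriodSeconds when StartupProbe field is set") exercises [Feature:ProbeTerminationGracePeriod]: a grace period set on the probe itself takes precedence over the pod-level terminationGracePeriodSeconds for kills triggered by that probe, which is why the restart count advances quickly here (1 to 2 in about 22s). A hedged sketch of the pod shape involved, with illustrative names and numbers rather than the real fixture, using the v1.21 API types:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64p(i int64) *int64 { return &i }

// startupOverridePod: the startup probe always fails and carries a short
// probe-level grace period, so the kubelet kills and restarts the
// container long before the pod-level 600s could apply.
func startupOverridePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-override"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: int64p(600), // pod-level value
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 3600"},
				StartupProbe: &corev1.Probe{
					Handler: corev1.Handler{ // renamed ProbeHandler in newer APIs
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					PeriodSeconds:    10,
					FailureThreshold: 1,
					// Probe-level override for probe-triggered kills.
					TerminationGracePeriodSeconds: int64p(5),
				},
			}},
		},
	}
}

------------------------------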
• [SLOW TEST:30.122 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":8,"skipped":623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:55.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Jun 10 23:25:55.774: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-0e6979b6-3c81-49c3-802d-3db6e62c4af8" in namespace "security-context-test-1194" to be "Succeeded or Failed" Jun 10 23:25:55.776: INFO: Pod "alpine-nnp-nil-0e6979b6-3c81-49c3-802d-3db6e62c4af8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519665ms Jun 10 23:25:57.779: INFO: Pod "alpine-nnp-nil-0e6979b6-3c81-49c3-802d-3db6e62c4af8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005267292s Jun 10 23:25:59.784: INFO: Pod "alpine-nnp-nil-0e6979b6-3c81-49c3-802d-3db6e62c4af8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009723654s Jun 10 23:26:01.788: INFO: Pod "alpine-nnp-nil-0e6979b6-3c81-49c3-802d-3db6e62c4af8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01359821s Jun 10 23:26:01.788: INFO: Pod "alpine-nnp-nil-0e6979b6-3c81-49c3-802d-3db6e62c4af8" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:26:01.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1194" for this suite. 
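------------------------------
The AllowPrivilegeEscalation spec above leaves the field nil with a non-root UID, and the one that follows repeats the run with the field explicitly true; both expect the checker pod to reach "Succeeded" because, under the defaulting being tested, a nil field leaves escalation permitted (reading the intent as: no_new_privs is not applied; that mechanism is an inference from the test's purpose, not from this log). A sketch of the relevant shape, with an illustrative command in place of the real checker binary:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64ptr(i int64) *int64 { return &i }

// nnpNilPod: AllowPrivilegeEscalation deliberately unset for UID 1000.
// Under the defaulting under test this behaves like an explicit true.
func nnpNilPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-nil"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "alpine-nnp-nil",
				Image: "alpine:3.12",
				// Illustrative check; the real test execs a helper binary.
				Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: int64ptr(1000),
					// AllowPrivilegeEscalation left nil on purpose.
				},
			}},
		},
	}
}

------------------------------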
• [SLOW TEST:6.065 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":9,"skipped":803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:26:02.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Jun 10 23:26:02.034: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-cf83fd63-153f-42e2-b289-b30f1d8ba9e0" in namespace "security-context-test-592" to be "Succeeded or Failed" Jun 10 23:26:02.037: INFO: Pod "alpine-nnp-true-cf83fd63-153f-42e2-b289-b30f1d8ba9e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.934262ms Jun 10 23:26:04.043: INFO: Pod "alpine-nnp-true-cf83fd63-153f-42e2-b289-b30f1d8ba9e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008182912s Jun 10 23:26:06.046: INFO: Pod "alpine-nnp-true-cf83fd63-153f-42e2-b289-b30f1d8ba9e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01205566s Jun 10 23:26:06.046: INFO: Pod "alpine-nnp-true-cf83fd63-153f-42e2-b289-b30f1d8ba9e0" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:26:06.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-592" for this suite. 
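------------------------------
A pattern worth decoding from the repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed' ... Phase="Pending" ... Elapsed: ..." lines, and from the wait.PollImmediate(0x77359400, 0x45d964b800, ...) frame in the earlier panic trace: those two hex arguments are 2,000,000,000ns and 300,000,000,000ns, i.e. a 2s poll interval with a 5m timeout. A sketch of the loop as the log suggests it (the helper name is illustrative):

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitSucceededOrFailed polls every 2s, up to 5m, logging the phase and
// elapsed time each round, matching the cadence of the log lines above.
func waitSucceededOrFailed(c kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		done := pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed
		return done, nil
	})
}

------------------------------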
• ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":10,"skipped":906,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:44.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-2598881d-38e2-48b2-8488-835bd91fb913 in namespace container-probe-9548 Jun 10 23:24:58.715: INFO: Started pod startup-2598881d-38e2-48b2-8488-835bd91fb913 in namespace container-probe-9548 STEP: checking the pod's current state and verifying that restartCount is present Jun 10 23:24:58.717: INFO: Initial restart count of pod startup-2598881d-38e2-48b2-8488-835bd91fb913 is 0 Jun 10 23:26:06.888: INFO: Restart count of pod container-probe-9548/startup-2598881d-38e2-48b2-8488-835bd91fb913 is now 1 (1m8.171265254s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:26:06.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9548" for this suite. 
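------------------------------
"should be restarted startup probe fails" above documents the basic startup-probe contract: while the probe fails the container is considered not started, and once failureThreshold consecutive failures accumulate the kubelet kills and restarts it (restartCount 0 to 1 after about 1m8s in this run). A hedged sketch of such a pod; the probe command and thresholds are illustrative, not the test's actual values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingStartupPod: the exec probe can never succeed, so after roughly
// failureThreshold * periodSeconds the container is restarted.
func failingStartupPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-fails"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 3600"},
				StartupProbe: &corev1.Probe{
					Handler: corev1.Handler{ // renamed ProbeHandler in newer APIs
						Exec: &corev1.ExecAction{
							// /tmp/startup never exists, so every probe fails.
							Command: []string{"cat", "/tmp/startup"},
						},
					},
					PeriodSeconds:    10,
					FailureThreshold: 3,
				},
			}},
		},
	}
}

------------------------------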
• [SLOW TEST:82.235 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":6,"skipped":1316,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:26:07.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Jun 10 23:26:07.039: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:26:07.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-3327" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:26:06.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:26:08.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8397" for this suite. 
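------------------------------
The "should not run without a specified user ID" spec above checks the refusal path of runAsNonRoot: with runAsNonRoot true and no runAsUser supplied anywhere, the kubelet must fall back to the image's own user and decline to start a container whose image defaults to UID 0, rather than run it as root. A sketch of the rejected shape; the image and names are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolp(b bool) *bool { return &b }

// nonRootNoUIDPod: RunAsNonRoot is asserted but no UID is supplied; with
// an image that defaults to root (busybox here), the kubelet refuses to
// start the container instead of running it as UID 0.
func nonRootNoUIDPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "run-as-non-root-no-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "explicit-root",
				Image:   "busybox:1.28",
				Command: []string{"id"},
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: boolp(true),
					// RunAsUser intentionally omitted.
				},
			}},
		},
	}
}

------------------------------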
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":11,"skipped":974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:49.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 10 23:25:58.527: INFO: start=2022-06-10 23:25:53.494999563 +0000 UTC m=+109.354523078, now=2022-06-10 23:25:58.527115964 +0000 UTC m=+114.386639610, kubelet pod: {"metadata":{"name":"pod-submit-remove-348315c6-749c-4b95-932c-0ad1c0f0684a","namespace":"pods-3931","uid":"1f57d77b-a747-4486-9b2d-2cb74778f0b9","resourceVersion":"78955","creationTimestamp":"2022-06-10T23:25:49Z","deletionTimestamp":"2022-06-10T23:26:23Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"459013592"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.189\"\n ],\n \"mac\": \"e2:44:73:ac:f1:9f\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.189\"\n ],\n \"mac\": \"e2:44:73:ac:f1:9f\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2022-06-10T23:25:49.473971253Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-06-10T23:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-bqp96","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-bqp96","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-10T23:25:49Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-06-10T23:25:55Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-06-10T23:25:55Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-10T23:25:49Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.189","podIPs":[{"ip":"10.244.4.189"}],"startTime":"2022-06-10T23:25:49Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2022-06-10T23:25:51Z","finishedAt":"2022-06-10T23:25:54Z","containerID":"docker://ae1be562a11235682b14b69ae573557593ffc2cbeec711641197bf82b77dca9b"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://ae1be562a11235682b14b69ae573557593ffc2cbeec711641197bf82b77dca9b","started":false}],"qosClass":"BestEffort"}} Jun 10 23:26:03.511: INFO: start=2022-06-10 23:25:53.494999563 +0000 UTC m=+109.354523078, now=2022-06-10 23:26:03.511373527 +0000 UTC m=+119.370897150, kubelet 
pod: {"metadata":{"name":"pod-submit-remove-348315c6-749c-4b95-932c-0ad1c0f0684a","namespace":"pods-3931","uid":"1f57d77b-a747-4486-9b2d-2cb74778f0b9","resourceVersion":"78955","creationTimestamp":"2022-06-10T23:25:49Z","deletionTimestamp":"2022-06-10T23:26:23Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"459013592"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.189\"\n ],\n \"mac\": \"e2:44:73:ac:f1:9f\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.189\"\n ],\n \"mac\": \"e2:44:73:ac:f1:9f\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2022-06-10T23:25:49.473971253Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-06-10T23:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-bqp96","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-bqp96","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-10T23:25:49Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-06-10T23:25:55Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-06-10T23:25:55Z","reason":"ContainersNotReady","message":"containers with unready status: 
[agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-10T23:25:49Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.189","podIPs":[{"ip":"10.244.4.189"}],"startTime":"2022-06-10T23:25:49Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2022-06-10T23:25:51Z","finishedAt":"2022-06-10T23:25:54Z","containerID":"docker://ae1be562a11235682b14b69ae573557593ffc2cbeec711641197bf82b77dca9b"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://ae1be562a11235682b14b69ae573557593ffc2cbeec711641197bf82b77dca9b","started":false}],"qosClass":"BestEffort"}} Jun 10 23:26:08.585: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:26:08.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3931" for this suite. • [SLOW TEST:19.164 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":6,"skipped":480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:04.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-26df47a9-362e-44b5-ba44-a9633ca91729 in namespace container-probe-9744 Jun 10 23:25:14.780: INFO: Started pod busybox-26df47a9-362e-44b5-ba44-a9633ca91729 in namespace container-probe-9744 Jun 10 23:25:14.780: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (1.247µs elapsed) Jun 10 23:25:16.781: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (2.00147226s elapsed) Jun 10 23:25:18.783: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (4.002855429s elapsed) Jun 10 23:25:20.784: INFO: pod 
container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (6.004502054s elapsed) Jun 10 23:25:22.786: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (8.006389937s elapsed) Jun 10 23:25:24.786: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (10.006598854s elapsed) Jun 10 23:25:26.787: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (12.006981874s elapsed) Jun 10 23:25:28.788: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (14.008224937s elapsed) Jun 10 23:25:30.790: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (16.010240775s elapsed) Jun 10 23:25:32.792: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (18.012609793s elapsed) Jun 10 23:25:34.793: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (20.013653351s elapsed) Jun 10 23:25:36.795: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (22.015801925s elapsed) Jun 10 23:25:38.798: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (24.018167641s elapsed) Jun 10 23:25:40.798: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (26.018511149s elapsed) Jun 10 23:25:42.804: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (28.023967509s elapsed) Jun 10 23:25:44.804: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (30.024565829s elapsed) Jun 10 23:25:46.806: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (32.026657848s elapsed) Jun 10 23:25:48.807: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (34.027794599s elapsed) Jun 10 23:25:50.808: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (36.027924374s elapsed) Jun 10 23:25:52.812: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (38.031858486s elapsed) Jun 10 23:25:54.813: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (40.0335001s elapsed) Jun 10 23:25:56.816: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (42.036650822s elapsed) Jun 10 23:25:58.818: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (44.038609088s elapsed) Jun 10 23:26:00.819: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (46.03939775s elapsed) Jun 10 23:26:02.822: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (48.04247815s elapsed) Jun 10 23:26:04.824: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (50.043942605s elapsed) Jun 10 23:26:06.824: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (52.044623231s elapsed) Jun 10 23:26:08.825: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (54.045519346s elapsed) Jun 10 23:26:10.826: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not ready (56.046647028s elapsed) Jun 10 23:26:12.829: INFO: pod container-probe-9744/busybox-26df47a9-362e-44b5-ba44-a9633ca91729 is not 
ready (58.049760271s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:26:14.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9744" for this suite.

• [SLOW TEST:70.106 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:26:08.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull image from invalid registry [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:26:15.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2042" for this suite.
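------------------------------
The three STEP lines above don't show what the runtime is actually asked to do, so here is a minimal Go sketch of an equivalent pod, built with the real k8s.io/api types. The image reference and names are illustrative assumptions, not values from this log; kubelet is expected to leave such a container Waiting with reason ErrImagePull and then ImagePullBackOff, never Running.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose image points at a registry that cannot be resolved.
	// The e2e expectation is that the container stays in Waiting with
	// reason ErrImagePull / ImagePullBackOff and never reaches Running.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-test"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "image-pull-test",
				Image:           "invalid.registry.example/name:latest", // unreachable by design
				ImagePullPolicy: corev1.PullAlways,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------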
• [SLOW TEST:6.079 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":7,"skipped":673,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:26:07.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 10 23:26:07.102: INFO: Waiting up to 5m0s for pod "security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1" in namespace "security-context-4266" to be "Succeeded or Failed"
Jun 10 23:26:07.104: INFO: Pod "security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.96089ms
Jun 10 23:26:09.109: INFO: Pod "security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006246839s
Jun 10 23:26:11.116: INFO: Pod "security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013944617s
Jun 10 23:26:13.124: INFO: Pod "security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021994644s
Jun 10 23:26:15.128: INFO: Pod "security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.025316984s
STEP: Saw pod success
Jun 10 23:26:15.128: INFO: Pod "security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1" satisfied condition "Succeeded or Failed"
Jun 10 23:26:15.130: INFO: Trying to get logs from node node2 pod security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1 container test-container:
STEP: delete the pod
Jun 10 23:26:15.259: INFO: Waiting for pod security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1 to disappear
Jun 10 23:26:15.262: INFO: Pod security-context-7c6c6aa0-3717-478c-bba2-bf5334bd15f1 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:26:15.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4266" for this suite.
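------------------------------
The STEP line above drives seccomp through the legacy pod annotation (seccomp.security.alpha.kubernetes.io/...) that this v1.21 release line still honors; the GA way to express "unconfined on the container" is the securityContext.seccompProfile field. A hedged Go sketch of a pod using the GA field (image and command are assumptions, not taken from this log; inside the container, "Seccomp: 0" in /proc/self/status means no filtering is applied):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// GA-field equivalent of the legacy seccomp annotation: an Unconfined
	// profile set on the container's own securityContext. The pod runs a
	// one-shot command whose output can be checked from the pod logs.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-seccomp"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // assumed image
				Command: []string{"grep", "Seccomp:", "/proc/self/status"},
				SecurityContext: &corev1.SecurityContext{
					SeccompProfile: &corev1.SeccompProfile{
						Type: corev1.SeccompProfileTypeUnconfined,
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------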
• [SLOW TEST:8.199 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":7,"skipped":1377,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:26:15.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull image [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:26:20.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3487" for this suite.

• [SLOW TEST:5.069 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":8,"skipped":692,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 10 23:26:20.243: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:26:15.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Jun 10 23:26:15.509: INFO: Waiting up to 5m0s for pod "busybox-user-0-5bec068e-2fe9-46e1-90b0-92c90d4b55c0" in namespace "security-context-test-3759" to be "Succeeded or Failed"
Jun 10 23:26:15.511: INFO: Pod "busybox-user-0-5bec068e-2fe9-46e1-90b0-92c90d4b55c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.694994ms
Jun 10 23:26:17.515: INFO: Pod "busybox-user-0-5bec068e-2fe9-46e1-90b0-92c90d4b55c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006226305s
Jun 10 23:26:19.520: INFO: Pod "busybox-user-0-5bec068e-2fe9-46e1-90b0-92c90d4b55c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010816959s
Jun 10 23:26:21.523: INFO: Pod "busybox-user-0-5bec068e-2fe9-46e1-90b0-92c90d4b55c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014788092s
Jun 10 23:26:21.524: INFO: Pod "busybox-user-0-5bec068e-2fe9-46e1-90b0-92c90d4b55c0" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:26:21.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3759" for this suite.

• [SLOW TEST:6.057 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":1480,"failed":0}
Jun 10 23:26:21.533: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:26:14.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a Pod requesting a RuntimeClass with scheduling without taints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a label on the found node.
STEP: verifying the node has the label foo-92b50719-5d7e-4d68-967b-7a3bc181e8f5 bar
STEP: verifying the node has the label fizz-a2ad00dd-9c04-44f4-9d81-fc5e16ae270b buzz
STEP: Trying to create runtimeclass and pod
STEP: removing the label fizz-a2ad00dd-9c04-44f4-9d81-fc5e16ae270b off the node node2
STEP: verifying the node doesn't have the label fizz-a2ad00dd-9c04-44f4-9d81-fc5e16ae270b
STEP: removing the label foo-92b50719-5d7e-4d68-967b-7a3bc181e8f5 off the node node2
STEP: verifying the node doesn't have the label foo-92b50719-5d7e-4d68-967b-7a3bc181e8f5
[AfterEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:26:25.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-853" for this suite.
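------------------------------
The trace above labels node2, creates a RuntimeClass plus a pod, and then removes the labels again; what it exercises is the RuntimeClass scheduling block, whose nodeSelector the scheduler merges into any pod that names the class, steering the pod onto the freshly labeled node. A Go sketch with the real k8s.io/api types (the handler, object names and image are assumptions; the label keys and values mirror the STEP lines):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A RuntimeClass whose scheduling.nodeSelector carries the labels the
	// test just applied to node2. Pods referencing the class inherit this
	// selector at scheduling time.
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-runtimeclass"}, // illustrative
		Handler:    "runc",                                       // assumed handler
		Scheduling: &nodev1.Scheduling{
			NodeSelector: map[string]string{
				"foo-92b50719-5d7e-4d68-967b-7a3bc181e8f5":  "bar",
				"fizz-a2ad00dd-9c04-44f4-9d81-fc5e16ae270b": "buzz",
			},
		},
	}
	rcName := rc.Name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"}, // illustrative
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1", // assumed pause image
			}},
		},
	}
	for _, obj := range []interface{}{rc, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
------------------------------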
• [SLOW TEST:10.128 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":5,"skipped":258,"failed":0} Jun 10 23:26:25.065: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:24.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Jun 10 23:25:24.750: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Jun 10 23:25:24.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3646 create -f -' Jun 10 23:25:25.179: INFO: stderr: "" Jun 10 23:25:25.179: INFO: stdout: "pod/liveness-exec created\n" Jun 10 23:25:25.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3646 create -f -' Jun 10 23:25:25.529: INFO: stderr: "" Jun 10 23:25:25.529: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Jun 10 23:25:31.539: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:33.539: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:33.542: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:35.544: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:35.545: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:37.549: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:37.549: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:39.553: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:39.553: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:41.557: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:41.558: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:43.565: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:43.565: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:45.570: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:45.570: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:47.576: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:47.576: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:49.580: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:49.580: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:51.584: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:51.584: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:53.588: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:53.588: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:55.591: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:55.591: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:57.597: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:25:57.597: INFO: Pod: liveness-http, restart 
count:0 Jun 10 23:25:59.601: INFO: Pod: liveness-http, restart count:0 Jun 10 23:25:59.601: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:01.604: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:01.604: INFO: Pod: liveness-http, restart count:0 Jun 10 23:26:03.609: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:03.609: INFO: Pod: liveness-http, restart count:0 Jun 10 23:26:05.614: INFO: Pod: liveness-http, restart count:0 Jun 10 23:26:05.615: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:07.620: INFO: Pod: liveness-http, restart count:0 Jun 10 23:26:07.620: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:09.624: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:09.624: INFO: Pod: liveness-http, restart count:1 Jun 10 23:26:09.624: INFO: Saw liveness-http restart, succeeded... Jun 10 23:26:11.628: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:13.631: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:15.635: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:17.640: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:19.644: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:21.647: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:23.654: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:25.657: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:27.665: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:29.668: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:31.673: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:33.677: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:35.685: INFO: Pod: liveness-exec, restart count:0 Jun 10 23:26:37.691: INFO: Pod: liveness-exec, restart count:1 Jun 10 23:26:37.691: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:26:37.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3646" for this suite. 
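------------------------------
The two pods in this trace are created verbatim from the documentation's liveness examples via kubectl. The exec flavor follows this shape: the container deletes its own health file after roughly 30 seconds, the exec probe then starts failing, and kubelet restarts the container, which is exactly the restart-count flip the poll loop above waits for. A Go sketch of that pod (image and args are quoted from the classic example, not from this log, so treat the exact values as assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		TimeoutSeconds:      1,
	}
	// Exec lives on the handler struct embedded in Probe, so assigning via
	// the promoted field compiles across the v1.21 (Handler) and v1.22+
	// (ProbeHandler) names of that embedded type.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/busybox", // as in the classic example
				Args: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -rf /tmp/health; sleep 600"},
				LivenessProbe: probe,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------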
• [SLOW TEST:72.979 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":2,"skipped":238,"failed":0} Jun 10 23:26:37.702: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:24:30.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Jun 10 23:24:35.589: INFO: watch delete seen for pod-submit-status-0-0 Jun 10 23:24:35.589: INFO: Pod pod-submit-status-0-0 on node node2 timings total=5.206877759s t=335ms run=0s execute=0s Jun 10 23:24:36.508: INFO: watch delete seen for pod-submit-status-1-0 Jun 10 23:24:36.508: INFO: Pod pod-submit-status-1-0 on node node2 timings total=6.125898795s t=96ms run=0s execute=0s Jun 10 23:24:42.789: INFO: watch delete seen for pod-submit-status-1-1 Jun 10 23:24:42.789: INFO: Pod pod-submit-status-1-1 on node node2 timings total=6.280458204s t=318ms run=0s execute=0s Jun 10 23:24:49.504: INFO: watch delete seen for pod-submit-status-2-0 Jun 10 23:24:49.504: INFO: Pod pod-submit-status-2-0 on node node2 timings total=19.122055184s t=1.99s run=0s execute=0s Jun 10 23:24:49.789: INFO: watch delete seen for pod-submit-status-0-1 Jun 10 23:24:49.789: INFO: Pod pod-submit-status-0-1 on node node2 timings total=14.200012325s t=1.074s run=0s execute=0s Jun 10 23:24:53.414: INFO: watch delete seen for pod-submit-status-0-2 Jun 10 23:24:53.414: INFO: Pod pod-submit-status-0-2 on node node1 timings total=3.624698086s t=18ms run=0s execute=0s Jun 10 23:24:54.188: INFO: watch delete seen for pod-submit-status-1-2 Jun 10 23:24:54.189: INFO: Pod pod-submit-status-1-2 on node node2 timings total=11.399689172s t=76ms run=0s execute=0s Jun 10 23:24:55.398: INFO: watch delete seen for pod-submit-status-2-1 Jun 10 23:24:55.398: INFO: Pod pod-submit-status-2-1 on node node1 timings total=5.893939251s t=1.195s run=0s execute=0s Jun 10 23:25:00.389: INFO: watch delete seen for pod-submit-status-0-3 Jun 10 23:25:00.389: INFO: Pod pod-submit-status-0-3 on node node2 timings total=6.974551385s t=1.125s run=0s execute=0s Jun 10 23:25:01.990: INFO: watch delete seen for pod-submit-status-1-3 Jun 10 23:25:01.990: INFO: Pod pod-submit-status-1-3 on node node2 timings total=7.801680368s t=555ms run=0s execute=0s Jun 10 23:25:04.200: INFO: watch delete seen for pod-submit-status-0-4 Jun 10 23:25:04.200: INFO: Pod pod-submit-status-0-4 on node node2 timings total=3.811617629s t=395ms run=0s execute=0s 
Jun 10 23:25:05.989: INFO: watch delete seen for pod-submit-status-2-2 Jun 10 23:25:05.990: INFO: Pod pod-submit-status-2-2 on node node2 timings total=10.591182523s t=1.354s run=0s execute=0s Jun 10 23:25:09.388: INFO: watch delete seen for pod-submit-status-1-4 Jun 10 23:25:09.389: INFO: Pod pod-submit-status-1-4 on node node2 timings total=7.398208075s t=449ms run=0s execute=0s Jun 10 23:25:10.789: INFO: watch delete seen for pod-submit-status-0-5 Jun 10 23:25:10.789: INFO: Pod pod-submit-status-0-5 on node node2 timings total=6.588254017s t=312ms run=0s execute=0s Jun 10 23:25:12.988: INFO: watch delete seen for pod-submit-status-1-5 Jun 10 23:25:12.988: INFO: Pod pod-submit-status-1-5 on node node2 timings total=3.599665337s t=73ms run=0s execute=0s Jun 10 23:25:14.190: INFO: watch delete seen for pod-submit-status-2-3 Jun 10 23:25:14.190: INFO: Pod pod-submit-status-2-3 on node node2 timings total=8.200727947s t=1.724s run=0s execute=0s Jun 10 23:25:15.389: INFO: watch delete seen for pod-submit-status-0-6 Jun 10 23:25:15.389: INFO: Pod pod-submit-status-0-6 on node node2 timings total=4.60033431s t=910ms run=0s execute=0s Jun 10 23:25:18.188: INFO: watch delete seen for pod-submit-status-0-7 Jun 10 23:25:18.188: INFO: Pod pod-submit-status-0-7 on node node2 timings total=2.799311535s t=225ms run=0s execute=0s Jun 10 23:25:18.789: INFO: watch delete seen for pod-submit-status-1-6 Jun 10 23:25:18.789: INFO: Pod pod-submit-status-1-6 on node node2 timings total=5.800737439s t=601ms run=0s execute=0s Jun 10 23:25:19.589: INFO: watch delete seen for pod-submit-status-2-4 Jun 10 23:25:19.590: INFO: Pod pod-submit-status-2-4 on node node2 timings total=5.399089904s t=499ms run=0s execute=0s Jun 10 23:25:21.423: INFO: watch delete seen for pod-submit-status-1-7 Jun 10 23:25:21.423: INFO: Pod pod-submit-status-1-7 on node node1 timings total=2.634270312s t=705ms run=0s execute=0s Jun 10 23:25:28.216: INFO: watch delete seen for pod-submit-status-2-5 Jun 10 23:25:28.216: INFO: Pod pod-submit-status-2-5 on node node2 timings total=8.626176775s t=1.901s run=0s execute=0s Jun 10 23:25:28.759: INFO: watch delete seen for pod-submit-status-0-8 Jun 10 23:25:28.759: INFO: Pod pod-submit-status-0-8 on node node1 timings total=10.570763787s t=1.299s run=0s execute=0s Jun 10 23:25:30.268: INFO: watch delete seen for pod-submit-status-2-6 Jun 10 23:25:30.268: INFO: Pod pod-submit-status-2-6 on node node1 timings total=2.05222568s t=208ms run=0s execute=0s Jun 10 23:25:34.069: INFO: watch delete seen for pod-submit-status-0-9 Jun 10 23:25:34.069: INFO: Pod pod-submit-status-0-9 on node node2 timings total=5.309818747s t=1.645s run=0s execute=0s Jun 10 23:25:36.875: INFO: watch delete seen for pod-submit-status-1-8 Jun 10 23:25:36.875: INFO: Pod pod-submit-status-1-8 on node node1 timings total=15.45161907s t=1.351s run=0s execute=0s Jun 10 23:25:47.049: INFO: watch delete seen for pod-submit-status-2-7 Jun 10 23:25:47.049: INFO: Pod pod-submit-status-2-7 on node node2 timings total=16.781347351s t=1.535s run=0s execute=0s Jun 10 23:25:47.058: INFO: watch delete seen for pod-submit-status-1-9 Jun 10 23:25:47.058: INFO: Pod pod-submit-status-1-9 on node node2 timings total=10.182817016s t=1.318s run=2s execute=0s Jun 10 23:25:48.778: INFO: watch delete seen for pod-submit-status-0-10 Jun 10 23:25:48.778: INFO: Pod pod-submit-status-0-10 on node node1 timings total=14.709125572s t=1.373s run=0s execute=0s Jun 10 23:25:49.258: INFO: watch delete seen for pod-submit-status-1-10 Jun 10 23:25:49.259: INFO: Pod 
pod-submit-status-1-10 on node node1 timings total=2.200490251s t=730ms run=0s execute=0s Jun 10 23:25:56.872: INFO: watch delete seen for pod-submit-status-2-8 Jun 10 23:25:56.872: INFO: Pod pod-submit-status-2-8 on node node1 timings total=9.822107603s t=622ms run=0s execute=0s Jun 10 23:25:56.881: INFO: watch delete seen for pod-submit-status-1-11 Jun 10 23:25:56.881: INFO: Pod pod-submit-status-1-11 on node node1 timings total=7.622323624s t=654ms run=0s execute=0s Jun 10 23:25:58.038: INFO: watch delete seen for pod-submit-status-0-11 Jun 10 23:25:58.038: INFO: Pod pod-submit-status-0-11 on node node2 timings total=9.259200611s t=82ms run=0s execute=0s Jun 10 23:26:01.239: INFO: watch delete seen for pod-submit-status-1-12 Jun 10 23:26:01.239: INFO: Pod pod-submit-status-1-12 on node node1 timings total=4.357736673s t=755ms run=0s execute=0s Jun 10 23:26:06.867: INFO: watch delete seen for pod-submit-status-0-12 Jun 10 23:26:06.867: INFO: Pod pod-submit-status-0-12 on node node1 timings total=8.829400156s t=1.75s run=0s execute=0s Jun 10 23:26:06.875: INFO: watch delete seen for pod-submit-status-2-9 Jun 10 23:26:06.875: INFO: Pod pod-submit-status-2-9 on node node1 timings total=10.003672508s t=1.402s run=0s execute=0s Jun 10 23:26:16.869: INFO: watch delete seen for pod-submit-status-1-13 Jun 10 23:26:16.870: INFO: Pod pod-submit-status-1-13 on node node1 timings total=15.630730097s t=706ms run=0s execute=0s Jun 10 23:26:17.123: INFO: watch delete seen for pod-submit-status-2-10 Jun 10 23:26:17.123: INFO: Pod pod-submit-status-2-10 on node node2 timings total=10.247476107s t=172ms run=0s execute=0s Jun 10 23:26:26.870: INFO: watch delete seen for pod-submit-status-1-14 Jun 10 23:26:26.870: INFO: Pod pod-submit-status-1-14 on node node1 timings total=10.000336728s t=1.138s run=0s execute=0s Jun 10 23:26:26.882: INFO: watch delete seen for pod-submit-status-2-11 Jun 10 23:26:26.882: INFO: Pod pod-submit-status-2-11 on node node1 timings total=9.75912524s t=64ms run=0s execute=0s Jun 10 23:26:36.880: INFO: watch delete seen for pod-submit-status-2-12 Jun 10 23:26:36.881: INFO: Pod pod-submit-status-2-12 on node node1 timings total=9.99839171s t=202ms run=0s execute=0s Jun 10 23:26:47.043: INFO: watch delete seen for pod-submit-status-2-13 Jun 10 23:26:47.043: INFO: Pod pod-submit-status-2-13 on node node2 timings total=10.161979367s t=1.452s run=0s execute=0s Jun 10 23:26:50.012: INFO: watch delete seen for pod-submit-status-2-14 Jun 10 23:26:50.012: INFO: Pod pod-submit-status-2-14 on node node2 timings total=2.969088961s t=734ms run=0s execute=0s Jun 10 23:27:04.333: INFO: watch delete seen for pod-submit-status-0-13 Jun 10 23:27:04.334: INFO: Pod pod-submit-status-0-13 on node node2 timings total=57.466343094s t=1.56s run=0s execute=0s Jun 10 23:27:17.041: INFO: watch delete seen for pod-submit-status-0-14 Jun 10 23:27:17.042: INFO: Pod pod-submit-status-0-14 on node node2 timings total=12.707896034s t=977ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:27:17.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-166" for this suite. 
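------------------------------
This test floods the node with pods whose single container always exits 1 and deletes each pod after a random delay; the property under test is that a container that never got to run must not surface success. A Go sketch of that invariant as a predicate over one ContainerStatus (the real test evaluates it from a watch stream, as the "watch delete seen" lines above show; the sample values are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// violatesInvariant reports whether a container status contradicts the
// test's expectation. The containers here are built to always exit 1, so
// any terminated state carrying exit code 0 would be a reporting bug.
func violatesInvariant(st corev1.ContainerStatus) bool {
	t := st.State.Terminated
	if t == nil {
		return false // still Waiting or Running: nothing to assert yet
	}
	return t.ExitCode == 0
}

func main() {
	sample := corev1.ContainerStatus{
		Name: "busybox", // illustrative
		State: corev1.ContainerState{
			Terminated: &corev1.ContainerStateTerminated{ExitCode: 1, Reason: "Error"},
		},
	}
	fmt.Println("invariant violated:", violatesInvariant(sample))
}
------------------------------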
• [SLOW TEST:166.691 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container Status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200
    should never report success for a pending container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":3,"skipped":519,"failed":0}
Jun 10 23:27:17.052: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:06.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
W0610 23:24:06.277806 40 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:24:06.278: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:24:06.279: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289
STEP: Creating pod liveness-6088ecd0-0f2d-4536-80de-4cb833ff562c in namespace container-probe-4981
Jun 10 23:24:22.300: INFO: Started pod liveness-6088ecd0-0f2d-4536-80de-4cb833ff562c in namespace container-probe-4981
STEP: checking the pod's current state and verifying that restartCount is present
Jun 10 23:24:22.302: INFO: Initial restart count of pod liveness-6088ecd0-0f2d-4536-80de-4cb833ff562c is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:28:22.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4981" for this suite.
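------------------------------
The four-minute gap between the initial restart count and the teardown above is the test watching that restartCount stays at 0. The probe target answers with an HTTP redirect to a different host; on this release line, kubelet's HTTP prober treats such a non-local redirect as a probe success (surfacing a ProbeWarning event rather than a failure), so the container is never restarted. A Go sketch of a pod in that shape (the agnhost arguments and the redirect path/parameter are assumptions, not taken from this log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// An aggressive liveness probe (FailureThreshold 1) whose endpoint
	// 302-redirects off-host; if the redirect were treated as a failure,
	// the container would restart almost immediately.
	probe := &corev1.Probe{InitialDelaySeconds: 15, FailureThreshold: 1}
	probe.HTTPGet = &corev1.HTTPGetAction{
		Path: "/redirect?loc=http%3A%2F%2F0.0.0.0%2F", // non-local target (assumed param)
		Port: intstr.FromInt(8080),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-nonlocal-redirect"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "agnhost",
				Image:         "k8s.gcr.io/e2e-test-images/agnhost:2.32", // image seen earlier in this log
				Args:          []string{"liveness"},
				LivenessProbe: probe,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------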
• [SLOW TEST:256.641 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":96,"failed":0}
Jun 10 23:28:22.897: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:24:54.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342
STEP: Creating pod startup-72c8ec1c-5cf0-417d-9440-f16765e6daf0 in namespace container-probe-7895
Jun 10 23:25:04.982: INFO: Started pod startup-72c8ec1c-5cf0-417d-9440-f16765e6daf0 in namespace container-probe-7895
STEP: checking the pod's current state and verifying that restartCount is present
Jun 10 23:25:04.984: INFO: Initial restart count of pod startup-72c8ec1c-5cf0-417d-9440-f16765e6daf0 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:29:05.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7895" for this suite.
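------------------------------
Here the four-minute wait demonstrates startup-probe gating: until a container's startup probe has succeeded, kubelet suspends liveness and readiness probing, so even a liveness probe that would otherwise fail cannot restart the container inside the startup window (60 attempts x 10s in the sketch below). Commands, image and thresholds are illustrative assumptions, not values from this log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An always-failing liveness probe, held off by a startup probe that
	// keeps retrying for up to 60 * 10s. Within that window the restart
	// count must stay at 0, which is what the test observes.
	liveness := &corev1.Probe{PeriodSeconds: 10, FailureThreshold: 1}
	liveness.Exec = &corev1.ExecAction{Command: []string{"/bin/false"}}

	startup := &corev1.Probe{PeriodSeconds: 10, FailureThreshold: 60}
	startup.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/startup-done"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-delays-liveness"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "busybox",
				Image:         "k8s.gcr.io/e2e-test-images/busybox:1.29", // assumed image
				Command:       []string{"sh", "-c", "sleep 600"},
				LivenessProbe: liveness,
				StartupProbe:  startup,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------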
• [SLOW TEST:250.636 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":2,"skipped":352,"failed":0} Jun 10 23:29:05.581: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:25:08.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Jun 10 23:25:08.650: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Jun 10 23:25:09.664: INFO: node status heartbeat is unchanged for 1.002884212s, waiting for 1m20s Jun 10 23:25:10.666: INFO: node status heartbeat is unchanged for 2.004562021s, waiting for 1m20s Jun 10 23:25:11.665: INFO: node status heartbeat is unchanged for 3.003635578s, waiting for 1m20s Jun 10 23:25:12.666: INFO: node status heartbeat is unchanged for 4.00518111s, waiting for 1m20s Jun 10 23:25:13.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:25:13.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: 
"KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 10 23:25:14.665: INFO: node status heartbeat is unchanged for 1.00019226s, waiting for 1m20s Jun 10 23:25:15.665: INFO: node status heartbeat is unchanged for 2.000204105s, waiting for 1m20s Jun 10 23:25:16.667: INFO: node status heartbeat is unchanged for 3.001996108s, waiting for 1m20s Jun 10 23:25:17.665: INFO: node status heartbeat is unchanged for 4.000265494s, waiting for 1m20s Jun 10 23:25:18.667: INFO: node status heartbeat is unchanged for 5.001419044s, waiting for 1m20s Jun 10 23:25:19.666: INFO: node status heartbeat is unchanged for 6.000930518s, waiting for 1m20s Jun 10 23:25:20.667: INFO: node status heartbeat is unchanged for 7.00178816s, waiting for 1m20s Jun 10 23:25:21.665: INFO: node status heartbeat is unchanged for 7.999931678s, waiting for 1m20s Jun 10 23:25:22.666: INFO: node status heartbeat is unchanged for 9.000437841s, waiting for 1m20s Jun 10 23:25:23.665: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Jun 10 23:25:23.669: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    
Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 10 23:25:24.666: INFO: node status heartbeat is unchanged for 1.001163349s, waiting for 1m20s Jun 10 23:25:25.665: INFO: node status heartbeat is unchanged for 2.000645624s, waiting for 1m20s Jun 10 23:25:26.665: INFO: node status heartbeat is unchanged for 3.000704779s, waiting for 1m20s Jun 10 23:25:27.667: INFO: node status heartbeat is unchanged for 4.002216523s, waiting for 1m20s Jun 10 23:25:28.665: INFO: node status heartbeat is unchanged for 5.000086329s, waiting for 1m20s Jun 10 23:25:29.664: INFO: node status heartbeat is unchanged for 5.999954545s, waiting for 1m20s Jun 10 23:25:30.665: INFO: node status heartbeat is unchanged for 7.000689126s, waiting for 1m20s Jun 10 23:25:31.664: INFO: node status heartbeat is unchanged for 7.999767411s, waiting for 1m20s Jun 10 23:25:32.666: INFO: node status heartbeat is unchanged for 9.001684669s, waiting for 1m20s Jun 10 23:25:33.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:25:33.671: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    
DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 10 23:25:34.666: INFO: node status heartbeat is unchanged for 999.775651ms, waiting for 1m20s Jun 10 23:25:35.666: INFO: node status heartbeat is unchanged for 1.999336924s, waiting for 1m20s Jun 10 23:25:36.668: INFO: node status heartbeat is unchanged for 3.002171642s, waiting for 1m20s Jun 10 23:25:37.667: INFO: node status heartbeat is unchanged for 4.000748873s, waiting for 1m20s Jun 10 23:25:38.666: INFO: node status heartbeat is unchanged for 5.000029919s, waiting for 1m20s Jun 10 23:25:39.666: INFO: node status heartbeat is unchanged for 6.000138003s, waiting for 1m20s Jun 10 23:25:40.667: INFO: node status heartbeat is unchanged for 7.001218932s, waiting for 1m20s Jun 10 23:25:41.666: INFO: node status heartbeat is unchanged for 7.999902117s, waiting for 1m20s Jun 10 23:25:42.666: INFO: node status heartbeat is unchanged for 9.000119913s, waiting for 1m20s Jun 10 23:25:43.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:25:43.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:43 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:43 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:43 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:25:44.666: INFO: node status heartbeat is unchanged for 1.000620099s, waiting for 1m20s Jun 10 23:25:45.669: INFO: node status heartbeat is unchanged for 2.003401828s, waiting for 1m20s Jun 10 23:25:46.668: INFO: node status heartbeat is unchanged for 3.002567432s, waiting for 1m20s Jun 10 23:25:47.666: INFO: node status heartbeat is unchanged for 4.000782394s, waiting for 1m20s Jun 10 23:25:48.667: INFO: node status heartbeat is unchanged for 5.002198921s, waiting for 1m20s Jun 10 23:25:49.665: INFO: node status heartbeat is unchanged for 5.999412059s, waiting for 1m20s Jun 10 23:25:50.665: INFO: node status heartbeat is unchanged for 6.99953593s, waiting for 1m20s Jun 10 23:25:51.666: INFO: node status heartbeat is unchanged for 8.001232933s, waiting for 1m20s Jun 10 23:25:52.666: INFO: node status heartbeat is unchanged for 9.000765777s, waiting for 1m20s Jun 10 23:25:53.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:25:53.669: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:53 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:53 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:53 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:25:54.665: INFO: node status heartbeat is unchanged for 1.000042064s, waiting for 1m20s Jun 10 23:25:55.665: INFO: node status heartbeat is unchanged for 2.000298829s, waiting for 1m20s Jun 10 23:25:56.667: INFO: node status heartbeat is unchanged for 3.001828505s, waiting for 1m20s Jun 10 23:25:57.666: INFO: node status heartbeat is unchanged for 4.000722185s, waiting for 1m20s Jun 10 23:25:58.667: INFO: node status heartbeat is unchanged for 5.002146785s, waiting for 1m20s Jun 10 23:25:59.666: INFO: node status heartbeat is unchanged for 6.000488687s, waiting for 1m20s Jun 10 23:26:00.666: INFO: node status heartbeat is unchanged for 7.001311306s, waiting for 1m20s Jun 10 23:26:01.666: INFO: node status heartbeat is unchanged for 8.000880825s, waiting for 1m20s Jun 10 23:26:02.665: INFO: node status heartbeat is unchanged for 8.99970245s, waiting for 1m20s Jun 10 23:26:03.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:26:03.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:03 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:03 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:25:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:03 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:26:04.666: INFO: node status heartbeat is unchanged for 999.917223ms, waiting for 1m20s Jun 10 23:26:05.666: INFO: node status heartbeat is unchanged for 1.999604678s, waiting for 1m20s Jun 10 23:26:06.669: INFO: node status heartbeat is unchanged for 3.002849156s, waiting for 1m20s Jun 10 23:26:07.665: INFO: node status heartbeat is unchanged for 3.999443366s, waiting for 1m20s Jun 10 23:26:08.665: INFO: node status heartbeat is unchanged for 4.998708219s, waiting for 1m20s Jun 10 23:26:09.665: INFO: node status heartbeat is unchanged for 5.998739826s, waiting for 1m20s Jun 10 23:26:10.666: INFO: node status heartbeat is unchanged for 6.999498192s, waiting for 1m20s Jun 10 23:26:11.667: INFO: node status heartbeat is unchanged for 8.000878251s, waiting for 1m20s Jun 10 23:26:12.666: INFO: node status heartbeat is unchanged for 8.999831828s, waiting for 1m20s Jun 10 23:26:13.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:26:13.669: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:13 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:13 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:13 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:26:14.665: INFO: node status heartbeat is unchanged for 1.00074891s, waiting for 1m20s Jun 10 23:26:15.666: INFO: node status heartbeat is unchanged for 2.001146457s, waiting for 1m20s Jun 10 23:26:16.667: INFO: node status heartbeat is unchanged for 3.002940635s, waiting for 1m20s Jun 10 23:26:17.667: INFO: node status heartbeat is unchanged for 4.00215535s, waiting for 1m20s Jun 10 23:26:18.667: INFO: node status heartbeat is unchanged for 5.002473535s, waiting for 1m20s Jun 10 23:26:19.666: INFO: node status heartbeat is unchanged for 6.001011196s, waiting for 1m20s Jun 10 23:26:20.666: INFO: node status heartbeat is unchanged for 7.001856035s, waiting for 1m20s Jun 10 23:26:21.665: INFO: node status heartbeat is unchanged for 8.000397354s, waiting for 1m20s Jun 10 23:26:22.667: INFO: node status heartbeat is unchanged for 9.002574316s, waiting for 1m20s Jun 10 23:26:23.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:26:23.671: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    NodeInfo: {MachineID: "bb5fb4a83f9949939cd41b7583e9b343", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "bd9c2046-c9ae-4b83-a147-c07e3487254e", KernelVersion: "3.10.0-1160.66.1.el7.x86_64", ...},    Images: []v1.ContainerImage{    ... 
// 29 identical elements    {Names: {"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8"..., "k8s.gcr.io/e2e-test-images/nginx:1.14-1"}, SizeBytes: 16032814},    {Names: {"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebc"..., "gcr.io/google-samples/hello-go-gke:1.0"}, SizeBytes: 11443478}, +  { +  Names: []string{ +  "k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf"..., +  "k8s.gcr.io/e2e-test-images/nonewprivs:1.3", +  }, +  SizeBytes: 7107254, +  },    {Names: {"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172"..., "appropriate/curl:edge"}, SizeBytes: 5654234},    {Names: {"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad"..., "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}, SizeBytes: 1154361},    ... // 3 identical elements    },    VolumesInUse: nil,    VolumesAttached: nil,    Config: nil,   } Jun 10 23:26:24.665: INFO: node status heartbeat is unchanged for 998.83992ms, waiting for 1m20s Jun 10 23:26:25.666: INFO: node status heartbeat is unchanged for 1.999942757s, waiting for 1m20s Jun 10 23:26:26.667: INFO: node status heartbeat is unchanged for 3.000422874s, waiting for 1m20s Jun 10 23:26:27.666: INFO: node status heartbeat is unchanged for 3.999266655s, waiting for 1m20s Jun 10 23:26:28.667: INFO: node status heartbeat is unchanged for 5.00111901s, waiting for 1m20s Jun 10 23:26:29.666: INFO: node status heartbeat is unchanged for 5.999824308s, waiting for 1m20s Jun 10 23:26:30.666: INFO: node status heartbeat is unchanged for 6.999564072s, waiting for 1m20s Jun 10 23:26:31.665: INFO: node status heartbeat is unchanged for 7.999098267s, waiting for 1m20s Jun 10 23:26:32.667: INFO: node status heartbeat is unchanged for 9.000329492s, waiting for 1m20s Jun 10 23:26:33.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:26:33.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:23 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 10 23:26:34.666: INFO: node status heartbeat is unchanged for 1.000820653s, waiting for 1m20s Jun 10 23:26:35.667: INFO: node status heartbeat is unchanged for 2.002175503s, waiting for 1m20s Jun 10 23:26:36.666: INFO: node status heartbeat is unchanged for 3.00066181s, waiting for 1m20s Jun 10 23:26:37.666: INFO: node status heartbeat is unchanged for 4.000769508s, waiting for 1m20s Jun 10 23:26:38.665: INFO: node status heartbeat is unchanged for 4.999427915s, waiting for 1m20s Jun 10 23:26:39.667: INFO: node status heartbeat is unchanged for 6.002013566s, waiting for 1m20s Jun 10 23:26:40.668: INFO: node status heartbeat is unchanged for 7.002759538s, waiting for 1m20s Jun 10 23:26:41.667: INFO: node status heartbeat is unchanged for 8.001556939s, waiting for 1m20s Jun 10 23:26:42.667: INFO: node status heartbeat is unchanged for 9.001491421s, waiting for 1m20s Jun 10 23:26:43.667: INFO: node status heartbeat is unchanged for 10.001641125s, waiting for 1m20s Jun 10 23:26:44.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:26:44.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:43 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:43 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:43 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: 
"Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 10 23:26:45.667: INFO: node status heartbeat is unchanged for 1.001123888s, waiting for 1m20s Jun 10 23:26:46.667: INFO: node status heartbeat is unchanged for 2.001606371s, waiting for 1m20s Jun 10 23:26:47.666: INFO: node status heartbeat is unchanged for 3.000650536s, waiting for 1m20s Jun 10 23:26:48.666: INFO: node status heartbeat is unchanged for 4.000497667s, waiting for 1m20s Jun 10 23:26:49.667: INFO: node status heartbeat is unchanged for 5.001289554s, waiting for 1m20s Jun 10 23:26:50.668: INFO: node status heartbeat is unchanged for 6.002128837s, waiting for 1m20s Jun 10 23:26:51.668: INFO: node status heartbeat is unchanged for 7.002820362s, waiting for 1m20s Jun 10 23:26:52.665: INFO: node status heartbeat is unchanged for 7.999713224s, waiting for 1m20s Jun 10 23:26:53.666: INFO: node status heartbeat is unchanged for 9.000080287s, waiting for 1m20s Jun 10 23:26:54.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:26:54.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:53 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:53 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:53 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:26:55.668: INFO: node status heartbeat is unchanged for 1.002865983s, waiting for 1m20s Jun 10 23:26:56.666: INFO: node status heartbeat is unchanged for 2.000573917s, waiting for 1m20s Jun 10 23:26:57.666: INFO: node status heartbeat is unchanged for 3.001516423s, waiting for 1m20s Jun 10 23:26:58.668: INFO: node status heartbeat is unchanged for 4.003247409s, waiting for 1m20s Jun 10 23:26:59.668: INFO: node status heartbeat is unchanged for 5.002938587s, waiting for 1m20s Jun 10 23:27:00.667: INFO: node status heartbeat is unchanged for 6.002433001s, waiting for 1m20s Jun 10 23:27:01.667: INFO: node status heartbeat is unchanged for 7.001834572s, waiting for 1m20s Jun 10 23:27:02.667: INFO: node status heartbeat is unchanged for 8.002209083s, waiting for 1m20s Jun 10 23:27:03.668: INFO: node status heartbeat is unchanged for 9.002549718s, waiting for 1m20s Jun 10 23:27:04.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:27:04.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:03 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:03 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:26:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:03 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:27:05.666: INFO: node status heartbeat is unchanged for 1.000034171s, waiting for 1m20s Jun 10 23:27:06.667: INFO: node status heartbeat is unchanged for 2.00077193s, waiting for 1m20s Jun 10 23:27:07.664: INFO: node status heartbeat is unchanged for 2.998488546s, waiting for 1m20s Jun 10 23:27:08.666: INFO: node status heartbeat is unchanged for 3.999742149s, waiting for 1m20s Jun 10 23:27:09.665: INFO: node status heartbeat is unchanged for 4.99900841s, waiting for 1m20s Jun 10 23:27:10.664: INFO: node status heartbeat is unchanged for 5.998685065s, waiting for 1m20s Jun 10 23:27:11.666: INFO: node status heartbeat is unchanged for 7.000178222s, waiting for 1m20s Jun 10 23:27:12.668: INFO: node status heartbeat is unchanged for 8.002029494s, waiting for 1m20s Jun 10 23:27:13.667: INFO: node status heartbeat is unchanged for 9.001134599s, waiting for 1m20s Jun 10 23:27:14.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:27:14.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:13 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:13 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:13 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:27:15.665: INFO: node status heartbeat is unchanged for 999.191114ms, waiting for 1m20s Jun 10 23:27:16.667: INFO: node status heartbeat is unchanged for 2.001080214s, waiting for 1m20s Jun 10 23:27:17.666: INFO: node status heartbeat is unchanged for 3.000189349s, waiting for 1m20s Jun 10 23:27:18.666: INFO: node status heartbeat is unchanged for 3.999784229s, waiting for 1m20s Jun 10 23:27:19.664: INFO: node status heartbeat is unchanged for 4.998608565s, waiting for 1m20s Jun 10 23:27:20.669: INFO: node status heartbeat is unchanged for 6.00298435s, waiting for 1m20s Jun 10 23:27:21.666: INFO: node status heartbeat is unchanged for 7.000524215s, waiting for 1m20s Jun 10 23:27:22.665: INFO: node status heartbeat is unchanged for 7.998960617s, waiting for 1m20s Jun 10 23:27:23.668: INFO: node status heartbeat is unchanged for 9.001959313s, waiting for 1m20s Jun 10 23:27:24.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:27:24.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:27:25.666: INFO: node status heartbeat is unchanged for 1.000004721s, waiting for 1m20s Jun 10 23:27:26.667: INFO: node status heartbeat is unchanged for 2.001728815s, waiting for 1m20s Jun 10 23:27:27.666: INFO: node status heartbeat is unchanged for 3.000146776s, waiting for 1m20s Jun 10 23:27:28.666: INFO: node status heartbeat is unchanged for 4.000174352s, waiting for 1m20s Jun 10 23:27:29.666: INFO: node status heartbeat is unchanged for 5.000209742s, waiting for 1m20s Jun 10 23:27:30.666: INFO: node status heartbeat is unchanged for 6.000230578s, waiting for 1m20s Jun 10 23:27:31.667: INFO: node status heartbeat is unchanged for 7.00182776s, waiting for 1m20s Jun 10 23:27:32.668: INFO: node status heartbeat is unchanged for 8.002122888s, waiting for 1m20s Jun 10 23:27:33.665: INFO: node status heartbeat is unchanged for 8.998981423s, waiting for 1m20s Jun 10 23:27:34.667: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:27:34.671: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:27:35.668: INFO: node status heartbeat is unchanged for 1.001421197s, waiting for 1m20s Jun 10 23:27:36.668: INFO: node status heartbeat is unchanged for 2.000886049s, waiting for 1m20s Jun 10 23:27:37.666: INFO: node status heartbeat is unchanged for 2.998764727s, waiting for 1m20s Jun 10 23:27:38.664: INFO: node status heartbeat is unchanged for 3.997462038s, waiting for 1m20s Jun 10 23:27:39.665: INFO: node status heartbeat is unchanged for 4.998418203s, waiting for 1m20s Jun 10 23:27:40.667: INFO: node status heartbeat is unchanged for 5.999746323s, waiting for 1m20s Jun 10 23:27:41.665: INFO: node status heartbeat is unchanged for 6.998376937s, waiting for 1m20s Jun 10 23:27:42.666: INFO: node status heartbeat is unchanged for 7.998666712s, waiting for 1m20s Jun 10 23:27:43.667: INFO: node status heartbeat is unchanged for 9.000479605s, waiting for 1m20s Jun 10 23:27:44.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:27:44.671: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:43 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:43 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:43 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:27:45.666: INFO: node status heartbeat is unchanged for 999.739267ms, waiting for 1m20s Jun 10 23:27:46.668: INFO: node status heartbeat is unchanged for 2.001488231s, waiting for 1m20s Jun 10 23:27:47.665: INFO: node status heartbeat is unchanged for 2.998975997s, waiting for 1m20s Jun 10 23:27:48.668: INFO: node status heartbeat is unchanged for 4.001621261s, waiting for 1m20s Jun 10 23:27:49.666: INFO: node status heartbeat is unchanged for 4.999551815s, waiting for 1m20s Jun 10 23:27:50.668: INFO: node status heartbeat is unchanged for 6.001896607s, waiting for 1m20s Jun 10 23:27:51.666: INFO: node status heartbeat is unchanged for 6.99964296s, waiting for 1m20s Jun 10 23:27:52.665: INFO: node status heartbeat is unchanged for 7.999087251s, waiting for 1m20s Jun 10 23:27:53.665: INFO: node status heartbeat is unchanged for 8.998620281s, waiting for 1m20s Jun 10 23:27:54.666: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Jun 10 23:27:54.670: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:54 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:54 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:54 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:27:55.668: INFO: node status heartbeat is unchanged for 1.002004619s, waiting for 1m20s Jun 10 23:27:56.666: INFO: node status heartbeat is unchanged for 2.000731728s, waiting for 1m20s Jun 10 23:27:57.666: INFO: node status heartbeat is unchanged for 2.999951752s, waiting for 1m20s Jun 10 23:27:58.668: INFO: node status heartbeat is unchanged for 4.002504466s, waiting for 1m20s Jun 10 23:27:59.667: INFO: node status heartbeat is unchanged for 5.001280638s, waiting for 1m20s Jun 10 23:28:00.669: INFO: node status heartbeat is unchanged for 6.002968474s, waiting for 1m20s Jun 10 23:28:01.665: INFO: node status heartbeat is unchanged for 6.999287016s, waiting for 1m20s Jun 10 23:28:02.713: INFO: node status heartbeat is unchanged for 8.047253607s, waiting for 1m20s Jun 10 23:28:03.666: INFO: node status heartbeat is unchanged for 9.000618278s, waiting for 1m20s Jun 10 23:28:04.667: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:28:04.672: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:04 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:04 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:27:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:04 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:28:05.667: INFO: node status heartbeat is unchanged for 999.393182ms, waiting for 1m20s Jun 10 23:28:06.665: INFO: node status heartbeat is unchanged for 1.997880287s, waiting for 1m20s Jun 10 23:28:07.670: INFO: node status heartbeat is unchanged for 3.002562745s, waiting for 1m20s Jun 10 23:28:08.667: INFO: node status heartbeat is unchanged for 3.999709173s, waiting for 1m20s Jun 10 23:28:09.666: INFO: node status heartbeat is unchanged for 4.998824874s, waiting for 1m20s Jun 10 23:28:10.667: INFO: node status heartbeat is unchanged for 5.999443788s, waiting for 1m20s Jun 10 23:28:11.668: INFO: node status heartbeat is unchanged for 7.001138007s, waiting for 1m20s Jun 10 23:28:12.668: INFO: node status heartbeat is unchanged for 8.001178886s, waiting for 1m20s Jun 10 23:28:13.667: INFO: node status heartbeat is unchanged for 8.999425942s, waiting for 1m20s Jun 10 23:28:14.667: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:28:14.671: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:14 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:14 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:14 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 10 23:28:15.666: INFO: node status heartbeat is unchanged for 999.325091ms, waiting for 1m20s Jun 10 23:28:16.668: INFO: node status heartbeat is unchanged for 2.001253822s, waiting for 1m20s Jun 10 23:28:17.665: INFO: node status heartbeat is unchanged for 2.998645245s, waiting for 1m20s Jun 10 23:28:18.667: INFO: node status heartbeat is unchanged for 4.000363564s, waiting for 1m20s Jun 10 23:28:19.667: INFO: node status heartbeat is unchanged for 4.999814677s, waiting for 1m20s Jun 10 23:28:20.666: INFO: node status heartbeat is unchanged for 5.999090573s, waiting for 1m20s Jun 10 23:28:21.666: INFO: node status heartbeat is unchanged for 6.999183426s, waiting for 1m20s Jun 10 23:28:22.667: INFO: node status heartbeat is unchanged for 8.000129742s, waiting for 1m20s Jun 10 23:28:23.665: INFO: node status heartbeat is unchanged for 8.997930743s, waiting for 1m20s Jun 10 23:28:24.668: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 10 23:28:24.672: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:24 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:24 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:24 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields
  }
Jun 10 23:28:25.664: INFO: node status heartbeat is unchanged for 996.777498ms, waiting for 1m20s
Jun 10 23:28:26.668: INFO: node status heartbeat is unchanged for 2.000747793s, waiting for 1m20s
Jun 10 23:28:27.668: INFO: node status heartbeat is unchanged for 3.000377856s, waiting for 1m20s
Jun 10 23:28:28.668: INFO: node status heartbeat is unchanged for 4.000339301s, waiting for 1m20s
Jun 10 23:28:29.666: INFO: node status heartbeat is unchanged for 4.998331301s, waiting for 1m20s
Jun 10 23:28:30.666: INFO: node status heartbeat is unchanged for 5.997933333s, waiting for 1m20s
Jun 10 23:28:31.665: INFO: node status heartbeat is unchanged for 6.99753497s, waiting for 1m20s
Jun 10 23:28:32.668: INFO: node status heartbeat is unchanged for 8.000426161s, waiting for 1m20s
Jun 10 23:28:33.666: INFO: node status heartbeat is unchanged for 8.998559611s, waiting for 1m20s
Jun 10 23:28:34.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:28:34.671: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:28:35.666: INFO: node status heartbeat is unchanged for 1.000732011s, waiting for 1m20s
Jun 10 23:28:36.668: INFO: node status heartbeat is unchanged for 2.00205973s, waiting for 1m20s
Jun 10 23:28:37.666: INFO: node status heartbeat is unchanged for 3.000074528s, waiting for 1m20s
Jun 10 23:28:38.668: INFO: node status heartbeat is unchanged for 4.002429353s, waiting for 1m20s
Jun 10 23:28:39.666: INFO: node status heartbeat is unchanged for 5.000007118s, waiting for 1m20s
Jun 10 23:28:40.668: INFO: node status heartbeat is unchanged for 6.002120665s, waiting for 1m20s
Jun 10 23:28:41.666: INFO: node status heartbeat is unchanged for 7.000325914s, waiting for 1m20s
Jun 10 23:28:42.666: INFO: node status heartbeat is unchanged for 8.000678255s, waiting for 1m20s
Jun 10 23:28:43.667: INFO: node status heartbeat is unchanged for 9.001404267s, waiting for 1m20s
Jun 10 23:28:44.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:28:44.671: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:28:45.667: INFO: node status heartbeat is unchanged for 1.001039694s, waiting for 1m20s
Jun 10 23:28:46.667: INFO: node status heartbeat is unchanged for 2.000420893s, waiting for 1m20s
Jun 10 23:28:47.668: INFO: node status heartbeat is unchanged for 3.001237311s, waiting for 1m20s
Jun 10 23:28:48.667: INFO: node status heartbeat is unchanged for 4.001177203s, waiting for 1m20s
Jun 10 23:28:49.665: INFO: node status heartbeat is unchanged for 4.998460865s, waiting for 1m20s
Jun 10 23:28:50.668: INFO: node status heartbeat is unchanged for 6.00186812s, waiting for 1m20s
Jun 10 23:28:51.667: INFO: node status heartbeat is unchanged for 7.00113329s, waiting for 1m20s
Jun 10 23:28:52.669: INFO: node status heartbeat is unchanged for 8.002274049s, waiting for 1m20s
Jun 10 23:28:53.665: INFO: node status heartbeat is unchanged for 8.998973238s, waiting for 1m20s
Jun 10 23:28:54.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:28:54.670: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:28:55.665: INFO: node status heartbeat is unchanged for 1.00015256s, waiting for 1m20s
Jun 10 23:28:56.666: INFO: node status heartbeat is unchanged for 2.000868843s, waiting for 1m20s
Jun 10 23:28:57.665: INFO: node status heartbeat is unchanged for 2.999890767s, waiting for 1m20s
Jun 10 23:28:58.666: INFO: node status heartbeat is unchanged for 4.001036009s, waiting for 1m20s
Jun 10 23:28:59.666: INFO: node status heartbeat is unchanged for 5.001119431s, waiting for 1m20s
Jun 10 23:29:00.666: INFO: node status heartbeat is unchanged for 6.001029158s, waiting for 1m20s
Jun 10 23:29:01.666: INFO: node status heartbeat is unchanged for 7.00119647s, waiting for 1m20s
Jun 10 23:29:02.666: INFO: node status heartbeat is unchanged for 8.000935646s, waiting for 1m20s
Jun 10 23:29:03.665: INFO: node status heartbeat is unchanged for 8.999677419s, waiting for 1m20s
Jun 10 23:29:04.667: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:29:04.671: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:28:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:29:05.664: INFO: node status heartbeat is unchanged for 997.696023ms, waiting for 1m20s
Jun 10 23:29:06.665: INFO: node status heartbeat is unchanged for 1.998113815s, waiting for 1m20s
Jun 10 23:29:07.666: INFO: node status heartbeat is unchanged for 2.999082825s, waiting for 1m20s
Jun 10 23:29:08.669: INFO: node status heartbeat is unchanged for 4.002796733s, waiting for 1m20s
Jun 10 23:29:09.666: INFO: node status heartbeat is unchanged for 4.999731393s, waiting for 1m20s
Jun 10 23:29:10.668: INFO: node status heartbeat is unchanged for 6.00129876s, waiting for 1m20s
Jun 10 23:29:11.669: INFO: node status heartbeat is unchanged for 7.002016322s, waiting for 1m20s
Jun 10 23:29:12.665: INFO: node status heartbeat is unchanged for 7.998796581s, waiting for 1m20s
Jun 10 23:29:13.666: INFO: node status heartbeat is unchanged for 8.999630145s, waiting for 1m20s
Jun 10 23:29:14.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:29:14.670: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:14 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:14 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:14 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:29:15.669: INFO: node status heartbeat is unchanged for 1.003174246s, waiting for 1m20s
Jun 10 23:29:16.668: INFO: node status heartbeat is unchanged for 2.002157455s, waiting for 1m20s
Jun 10 23:29:17.666: INFO: node status heartbeat is unchanged for 3.00056795s, waiting for 1m20s
Jun 10 23:29:18.667: INFO: node status heartbeat is unchanged for 4.001661387s, waiting for 1m20s
Jun 10 23:29:19.666: INFO: node status heartbeat is unchanged for 5.000468577s, waiting for 1m20s
Jun 10 23:29:20.668: INFO: node status heartbeat is unchanged for 6.002418877s, waiting for 1m20s
Jun 10 23:29:21.666: INFO: node status heartbeat is unchanged for 7.000900773s, waiting for 1m20s
Jun 10 23:29:22.668: INFO: node status heartbeat is unchanged for 8.002736448s, waiting for 1m20s
Jun 10 23:29:23.667: INFO: node status heartbeat is unchanged for 9.001361728s, waiting for 1m20s
Jun 10 23:29:24.668: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:29:24.672: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:14 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:24 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:14 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:24 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:14 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:24 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:29:25.668: INFO: node status heartbeat is unchanged for 999.78757ms, waiting for 1m20s
Jun 10 23:29:26.668: INFO: node status heartbeat is unchanged for 1.999898345s, waiting for 1m20s
Jun 10 23:29:27.667: INFO: node status heartbeat is unchanged for 2.998951131s, waiting for 1m20s
Jun 10 23:29:28.667: INFO: node status heartbeat is unchanged for 3.99971044s, waiting for 1m20s
Jun 10 23:29:29.666: INFO: node status heartbeat is unchanged for 4.99849688s, waiting for 1m20s
Jun 10 23:29:30.668: INFO: node status heartbeat is unchanged for 6.000140167s, waiting for 1m20s
Jun 10 23:29:31.666: INFO: node status heartbeat is unchanged for 6.998329142s, waiting for 1m20s
Jun 10 23:29:32.666: INFO: node status heartbeat is unchanged for 7.998508956s, waiting for 1m20s
Jun 10 23:29:33.665: INFO: node status heartbeat is unchanged for 8.99761273s, waiting for 1m20s
Jun 10 23:29:34.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:29:34.669: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:29:35.665: INFO: node status heartbeat is unchanged for 1.000383108s, waiting for 1m20s
Jun 10 23:29:36.664: INFO: node status heartbeat is unchanged for 1.999390811s, waiting for 1m20s
Jun 10 23:29:37.666: INFO: node status heartbeat is unchanged for 3.00071916s, waiting for 1m20s
Jun 10 23:29:38.667: INFO: node status heartbeat is unchanged for 4.001522542s, waiting for 1m20s
Jun 10 23:29:39.666: INFO: node status heartbeat is unchanged for 5.000759724s, waiting for 1m20s
Jun 10 23:29:40.667: INFO: node status heartbeat is unchanged for 6.002021374s, waiting for 1m20s
Jun 10 23:29:41.664: INFO: node status heartbeat is unchanged for 6.999375992s, waiting for 1m20s
Jun 10 23:29:42.669: INFO: node status heartbeat is unchanged for 8.003651053s, waiting for 1m20s
Jun 10 23:29:43.666: INFO: node status heartbeat is unchanged for 9.001004379s, waiting for 1m20s
Jun 10 23:29:44.666: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:29:44.670: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:29:45.665: INFO: node status heartbeat is unchanged for 999.278907ms, waiting for 1m20s
Jun 10 23:29:46.665: INFO: node status heartbeat is unchanged for 1.999789116s, waiting for 1m20s
Jun 10 23:29:47.667: INFO: node status heartbeat is unchanged for 3.001795228s, waiting for 1m20s
Jun 10 23:29:48.668: INFO: node status heartbeat is unchanged for 4.001972895s, waiting for 1m20s
Jun 10 23:29:49.666: INFO: node status heartbeat is unchanged for 5.000341382s, waiting for 1m20s
Jun 10 23:29:50.668: INFO: node status heartbeat is unchanged for 6.002326589s, waiting for 1m20s
Jun 10 23:29:51.667: INFO: node status heartbeat is unchanged for 7.000890645s, waiting for 1m20s
Jun 10 23:29:52.667: INFO: node status heartbeat is unchanged for 8.000938794s, waiting for 1m20s
Jun 10 23:29:53.666: INFO: node status heartbeat is unchanged for 9.000558847s, waiting for 1m20s
Jun 10 23:29:54.665: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:29:54.669: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:29:55.668: INFO: node status heartbeat is unchanged for 1.002686336s, waiting for 1m20s
Jun 10 23:29:56.668: INFO: node status heartbeat is unchanged for 2.003220761s, waiting for 1m20s
Jun 10 23:29:57.666: INFO: node status heartbeat is unchanged for 3.001057052s, waiting for 1m20s
Jun 10 23:29:58.667: INFO: node status heartbeat is unchanged for 4.001875397s, waiting for 1m20s
Jun 10 23:29:59.667: INFO: node status heartbeat is unchanged for 5.001764463s, waiting for 1m20s
Jun 10 23:30:00.668: INFO: node status heartbeat is unchanged for 6.003417891s, waiting for 1m20s
Jun 10 23:30:01.667: INFO: node status heartbeat is unchanged for 7.001899719s, waiting for 1m20s
Jun 10 23:30:02.668: INFO: node status heartbeat is unchanged for 8.003171045s, waiting for 1m20s
Jun 10 23:30:03.665: INFO: node status heartbeat is unchanged for 8.99987465s, waiting for 1m20s
Jun 10 23:30:04.666: INFO: node status heartbeat is unchanged for 10.000842325s, waiting for 1m20s
Jun 10 23:30:05.667: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jun 10 23:30:05.671: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-10 20:03:16 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:30:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:30:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:29:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-06-10 23:30:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-06-10 19:59:19 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-10 20:00:31 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Jun 10 23:30:06.665: INFO: node status heartbeat is unchanged for 998.766804ms, waiting for 1m20s
Jun 10 23:30:07.665: INFO: node status heartbeat is unchanged for 1.998824153s, waiting for 1m20s
Jun 10 23:30:08.666: INFO: node status heartbeat is unchanged for 2.999404141s, waiting for 1m20s
Jun 10 23:30:08.669: INFO: node status heartbeat is unchanged for 3.002624902s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:30:08.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-768" for this suite.

• [SLOW TEST:300.062 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:26:08.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
Jun 10 23:26:08.465: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:26:10.469: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:26:12.471: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:26:14.468: INFO: The status of Pod pod-back-off-image is Running (Ready = true)
STEP: getting restart delay-0
Jun 10 23:27:59.682: INFO: getRestartDelay: restartCount = 4, finishedAt=2022-06-10 23:27:14 +0000 UTC restartedAt=2022-06-10 23:27:58 +0000 UTC (44s)
STEP: getting restart delay-1
Jun 10 23:28:04.718: INFO: Container's last state is not "Terminated".
Jun 10 23:28:05.722: INFO: Container's last state is not "Terminated".
Jun 10 23:28:06.725: INFO: Container's last state is not "Terminated".
Jun 10 23:28:07.727: INFO: Container's last state is not "Terminated".
Jun 10 23:28:08.733: INFO: Container's last state is not "Terminated".
Jun 10 23:28:09.738: INFO: Container's last state is not "Terminated".
Jun 10 23:28:10.744: INFO: Container's last state is not "Terminated".
Jun 10 23:28:11.750: INFO: Container's last state is not "Terminated".
Jun 10 23:28:12.755: INFO: Container's last state is not "Terminated".
Jun 10 23:28:13.759: INFO: Container's last state is not "Terminated".
Jun 10 23:28:14.764: INFO: Container's last state is not "Terminated".
Jun 10 23:28:15.769: INFO: Container's last state is not "Terminated".
Jun 10 23:28:16.775: INFO: Container's last state is not "Terminated".
Jun 10 23:28:17.780: INFO: Container's last state is not "Terminated".
Jun 10 23:28:18.786: INFO: Container's last state is not "Terminated".
Jun 10 23:28:19.791: INFO: Container's last state is not "Terminated".
Jun 10 23:29:32.115: INFO: getRestartDelay: restartCount = 5, finishedAt=2022-06-10 23:28:03 +0000 UTC restartedAt=2022-06-10 23:29:30 +0000 UTC (1m27s)
STEP: getting restart delay-2
Jun 10 23:32:29.970: INFO: getRestartDelay: restartCount = 6, finishedAt=2022-06-10 23:29:35 +0000 UTC restartedAt=2022-06-10 23:32:28 +0000 UTC (2m53s)
STEP: updating the image
Jun 10 23:32:30.482: INFO: Successfully updated pod "pod-back-off-image"
STEP: get restart delay after image update
Jun 10 23:32:54.559: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-06-10 23:32:39 +0000 UTC restartedAt=2022-06-10 23:32:52 +0000 UTC (13s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:32:54.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8095" for this suite.

• [SLOW TEST:406.143 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
------------------------------
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":12,"skipped":1064,"failed":0}
Jun 10 23:32:54.571: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:25:34.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Jun 10 23:25:34.239: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:25:36.243: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:25:38.245: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:25:40.244: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Jun 10 23:37:12.702: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-06-10 23:32:07 +0000 UTC restartedAt=2022-06-10 23:37:11 +0000 UTC (5m4s)
Jun 10 23:42:29.087: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-06-10 23:37:16 +0000 UTC restartedAt=2022-06-10 23:42:27 +0000 UTC (5m11s)
Jun 10 23:47:36.478: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-06-10 23:42:32 +0000 UTC restartedAt=2022-06-10 23:47:35 +0000 UTC (5m3s)
STEP: getting restart delay after a capped delay
Jun 10 23:52:42.833: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-06-10 23:47:40 +0000 UTC restartedAt=2022-06-10 23:52:41 +0000 UTC (5m1s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:52:42.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2076" for this suite.

• [SLOW TEST:1628.640 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":9,"skipped":780,"failed":0}
Jun 10 23:52:42.847: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":192,"failed":0}
Jun 10 23:30:08.700: INFO: Running AfterSuite actions on all nodes
Jun 10 23:52:42.929: INFO: Running AfterSuite actions on node 1
Jun 10 23:52:42.930: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5773 Specs in 1717.175 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5720 Skipped

Ginkgo ran 1 suite in 28m38.836929762s
Test Suite Failed
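------------------------------
For reference, the NodeLease spec above verifies that while the node's lease in the kube-node-lease namespace is renewed every few seconds, the kubelet patches the full Node .status (and with it each condition's LastHeartbeatTime) only about once every 10s, and that the node stays Ready despite the infrequent status reports. Below is a minimal client-go sketch of the same observation; the kubeconfig path and node name are taken from this run, and the program is illustrative rather than the e2e framework's own helper.

// heartbeatcheck: a minimal sketch (assumptions: kubeconfig at
// /root/.kube/config and node name "node2", both taken from this run).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// The Lease object the kubelet renews as its cheap, frequent heartbeat.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(ctx, "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("lease last renewed:", lease.Spec.RenewTime)

	// The full status object, updated far less often; LastHeartbeatTime on
	// each condition is exactly what the v1.NodeStatus diffs above track.
	node, err := client.CoreV1().Nodes().Get(ctx, "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s heartbeat=%s\n", c.Type, c.Status, c.LastHeartbeatTime)
	}
}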
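------------------------------
The getRestartDelay readings above come straight from the pod's container status: the delay is the gap between the previous termination's finishedAt and the next run's startedAt. Here is a sketch of that computation; the helper name and the single-container assumption are mine, not the framework's.

// Package podutil holds a sketch of the delay computation behind the
// "getRestartDelay: restartCount = ..., finishedAt=... restartedAt=..." lines.
package podutil

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
)

// RestartDelay returns the back-off gap for the pod's first container: the
// time between the previous termination (finishedAt) and the start of the
// current run (restartedAt). Assumes a single-container pod.
func RestartDelay(pod *v1.Pod) (time.Duration, error) {
	if len(pod.Status.ContainerStatuses) == 0 {
		return 0, fmt.Errorf("pod %q has no container statuses yet", pod.Name)
	}
	st := pod.Status.ContainerStatuses[0]
	// Until the container has terminated once and started again there is
	// nothing to measure; this is the case the log above polls through as
	// "Container's last state is not Terminated".
	if st.LastTerminationState.Terminated == nil || st.State.Running == nil {
		return 0, fmt.Errorf("container %q has not completed a restart", st.Name)
	}
	finishedAt := st.LastTerminationState.Terminated.FinishedAt.Time
	restartedAt := st.State.Running.StartedAt.Time
	return restartedAt.Sub(finishedAt), nil
}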
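------------------------------
Taken together, the two Pods specs show the kubelet's container restart back-off behavior: delays roughly double per crash (44s, then 1m27s, then 2m53s), are capped at MaxContainerBackOff (hence the steady ~5m readings), and are keyed to the container spec, so updating the image resets the timer (the 13s reading right after the update). The toy model below illustrates that policy; the 10s base and 300s cap match the upstream kubelet defaults around v1.21, but treat them, and every name here, as assumptions rather than the kubelet's actual code.

// Toy model of kubelet-style restart back-off: start at a base delay,
// double per failure, cap at a maximum, and reset when the key (standing
// in for the container spec, including its image) changes.
package main

import (
	"fmt"
	"time"
)

type backOff struct {
	base, max time.Duration
	delays    map[string]time.Duration // key: container identity incl. image
}

func newBackOff(base, max time.Duration) *backOff {
	return &backOff{base: base, max: max, delays: map[string]time.Duration{}}
}

// Next returns the delay before the next restart of key, doubling the
// stored value up to the cap.
func (b *backOff) Next(key string) time.Duration {
	d, ok := b.delays[key]
	if !ok {
		d = b.base
	} else {
		d *= 2
		if d > b.max {
			d = b.max
		}
	}
	b.delays[key] = d
	return d
}

// Reset drops the state for key, which is the effect an image update has
// when the back-off key incorporates the container spec.
func (b *backOff) Reset(key string) { delete(b.delays, key) }

func main() {
	bo := newBackOff(10*time.Second, 300*time.Second)
	for i := 0; i < 7; i++ {
		fmt.Println("delay:", bo.Next("pod-back-off-image/back-off")) // 10s, 20s, ... capped at 5m
	}
	bo.Reset("pod-back-off-image/back-off") // image updated
	fmt.Println("after image update:", bo.Next("pod-back-off-image/back-off"))
}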