Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1655508218 - Will randomize all specs
Will run 5773 specs
Running in parallel across 10 nodes
Jun 17 23:23:40.045: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:23:40.050: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 17 23:23:40.077: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 17 23:23:40.131: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting
Jun 17 23:23:40.131: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting
Jun 17 23:23:40.131: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 17 23:23:40.131: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 17 23:23:40.131: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 17 23:23:40.154: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 17 23:23:40.154: INFO: e2e test version: v1.21.9
Jun 17 23:23:40.155: INFO: kube-apiserver version: v1.21.1
Jun 17 23:23:40.155: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:23:40.161: INFO: Cluster IP family: ipv4
Jun 17 23:23:40.160: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:23:40.177: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSS
------------------------------
Jun 17 23:23:40.169: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:23:40.193: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Jun 17 23:23:40.182: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:23:40.202: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 17 23:23:40.181: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:23:40.202: INFO: Cluster IP family: ipv4
SSS
------------------------------
Jun 17 23:23:40.186: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:23:40.206: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Jun 17 23:23:40.185: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:23:40.209: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun 17 23:23:40.186: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:23:40.209: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Jun 17
23:23:40.193: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:23:40.214: INFO: Cluster IP family: ipv4 SSSSSSSS ------------------------------ Jun 17 23:23:40.198: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:23:40.218: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap W0617 23:23:40.697709 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:23:40.698: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:23:40.699: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-1418/configmap-test-d00cfa35-7cd4-4853-830b-c4a1d992eea7 STEP: Updating configMap configmap-1418/configmap-test-d00cfa35-7cd4-4853-830b-c4a1d992eea7 STEP: Verifying update of ConfigMap configmap-1418/configmap-test-d00cfa35-7cd4-4853-830b-c4a1d992eea7 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:40.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1418" for this suite. 
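For context, the ConfigMap spec above boils down to a create/update/read-back round trip against the API server. Below is a minimal client-go sketch of that flow; the namespace, object name, and data key are illustrative (the suite uses its own generated namespace such as configmap-1418 and framework helpers), and only the kubeconfig path is taken from the run above.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	ns := "default" // illustrative; the e2e framework generates a per-test namespace

	// Create the ConfigMap.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data": "value-0"},
	}
	cm, err = client.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Update it.
	cm.Data["data"] = "value-1"
	if _, err := client.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Verify the update by reading it back.
	got, err := client.CoreV1().ConfigMaps(ns).Get(ctx, cm.Name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("data after update:", got.Data["data"])
}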
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Jun 17 23:23:41.017: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:41.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-8034" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:41.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 Jun 17 23:23:41.126: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:41.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-4840" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples W0617 23:23:40.290008 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:23:40.290: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:23:40.292: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Jun 17 23:23:40.302: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Jun 17 23:23:40.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2989 create -f -' Jun 17 23:23:40.866: INFO: stderr: "" Jun 17 23:23:40.866: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Jun 17 23:23:46.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2989 logs dapi-test-pod test-container' Jun 17 23:23:47.612: INFO: stderr: "" Jun 17 23:23:47.612: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2989\nMY_POD_IP=10.244.4.113\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Jun 17 23:23:47.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2989 logs dapi-test-pod test-container' Jun 17 23:23:47.832: INFO: stderr: "" Jun 17 23:23:47.832: INFO: stdout: 
"KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2989\nMY_POD_IP=10.244.4.113\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:47.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-2989" for this suite. • [SLOW TEST:7.575 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133 should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":29,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:47.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:47.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-5152" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":2,"skipped":81,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test W0617 23:23:40.216128 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:23:40.216: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:23:40.219: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Jun 17 23:23:40.235: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-7d7ee13d-e434-4497-be2e-9cc686860d87" in namespace "security-context-test-5164" to be "Succeeded or Failed" Jun 17 23:23:40.239: INFO: Pod "busybox-readonly-true-7d7ee13d-e434-4497-be2e-9cc686860d87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038724ms Jun 17 23:23:42.243: INFO: Pod "busybox-readonly-true-7d7ee13d-e434-4497-be2e-9cc686860d87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008514434s Jun 17 23:23:44.247: INFO: Pod "busybox-readonly-true-7d7ee13d-e434-4497-be2e-9cc686860d87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012530194s Jun 17 23:23:46.253: INFO: Pod "busybox-readonly-true-7d7ee13d-e434-4497-be2e-9cc686860d87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018356707s Jun 17 23:23:48.258: INFO: Pod "busybox-readonly-true-7d7ee13d-e434-4497-be2e-9cc686860d87": Phase="Failed", Reason="", readiness=false. Elapsed: 8.023693206s Jun 17 23:23:48.258: INFO: Pod "busybox-readonly-true-7d7ee13d-e434-4497-be2e-9cc686860d87" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:48.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5164" for this suite. 
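The Security Context spec above creates a busybox pod with readOnlyRootFilesystem=true and waits for it to reach either the Succeeded or Failed phase (here it ended Failed, which that wait condition accepts). Below is a sketch of the kind of pod spec involved; the write command is an assumption about how the read-only root filesystem is exercised, not the exact test payload, and the namespace is illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-true"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				// Assumed payload: try to write to the root filesystem,
				// which should fail when the rootfs is mounted read-only.
				Command: []string{"sh", "-c", "touch /should-fail"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name, "- expect it to end in the Failed phase")
}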
• [SLOW TEST:8.082 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":12,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime W0617 23:23:40.335660 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:23:40.335: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:23:40.337: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:48.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5120" for this suite. 
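The Container Runtime spec above verifies that pulling an image from a registry that requires authentication fails when the pod carries no imagePullSecrets. A minimal sketch of that setup; the registry and image names below are placeholders, not the image the suite actually uses, and the namespace is illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-no-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "image-pull-test",
				Image: "registry.example.com/private/app:latest", // placeholder private image
			}},
			// Deliberately no ImagePullSecrets: the kubelet should report
			// ErrImagePull / ImagePullBackOff in the container's waiting state.
		},
	}
	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name, "- check status.containerStatuses[0].state.waiting.reason")
}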
• [SLOW TEST:8.086 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":1,"skipped":28,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test W0617 23:23:40.276264 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:23:40.276: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:23:40.278: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Jun 17 23:23:40.292: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-426" to be "Succeeded or Failed" Jun 17 23:23:40.294: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134758ms Jun 17 23:23:42.301: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008795104s Jun 17 23:23:44.306: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013770882s Jun 17 23:23:46.311: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019494954s Jun 17 23:23:48.315: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022969434s Jun 17 23:23:50.318: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026483207s Jun 17 23:23:52.323: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.030570253s Jun 17 23:23:52.323: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:52.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-426" for this suite. • [SLOW TEST:12.086 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":11,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test W0617 23:23:40.394981 41 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:23:40.395: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:23:40.397: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Jun 17 23:23:40.409: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9" in namespace "security-context-test-4739" to be "Succeeded or Failed" Jun 17 23:23:40.411: INFO: Pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399348ms Jun 17 23:23:42.416: INFO: Pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006684399s Jun 17 23:23:44.422: INFO: Pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012853525s Jun 17 23:23:46.430: INFO: Pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020689339s Jun 17 23:23:48.435: INFO: Pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02602683s Jun 17 23:23:50.439: INFO: Pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.029716671s Jun 17 23:23:52.441: INFO: Pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.032418057s Jun 17 23:23:52.441: INFO: Pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9" satisfied condition "Succeeded or Failed" Jun 17 23:23:52.447: INFO: Got logs for pod "busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:52.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4739" for this suite. • [SLOW TEST:12.083 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0617 23:23:40.329695 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:23:40.329: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:23:40.331: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 E0617 23:23:48.353735 26 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 239 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x654af00, 0x9c066c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86 panic(0x654af00, 0x9c066c0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc000bf6f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001354340, 0xc000bf6f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000d099c8, 0xc001354340, 0xc0010c4240, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc000d099c8, 0xc001354340, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000d099c8, 0xc001354340, 0xc000d099c8, 0xc001354340) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc001354340, 0x14, 0xc003840ff0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc003bb7600, 0xc000d09620, 0x14, 0xc003840ff0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001380420, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001380420, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc00138a3a0, 0x76a2fe0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002308690, 0x0, 0x76a2fe0, 0xc0001f0800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002308690, 0x76a2fe0, 0xc0001f0800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003684000, 0xc002308690, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003684000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003684000, 0xc002bfc030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001ce230, 0x7f4519259e30, 0xc001539380, 0x6f170c8, 0x14, 0xc002e1cf60, 0x3, 0x3, 0x7759478, 0xc0001f0800, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x76a80c0, 0xc001539380, 0x6f170c8, 0x14, 0xc002e1afc0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x76a80c0, 0xc001539380, 0x6f170c8, 0x14, 0xc00170d5a0, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001539380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001539380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001539380, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-probe-5958". STEP: Found 3 events. 
Jun 17 23:23:48.357: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for startup-f20264cd-adbe-45d8-8ca3-e39ab6e9774a: { } Scheduled: Successfully assigned container-probe-5958/startup-f20264cd-adbe-45d8-8ca3-e39ab6e9774a to node2 Jun 17 23:23:48.357: INFO: At 2022-06-17 23:23:47 +0000 UTC - event for startup-f20264cd-adbe-45d8-8ca3-e39ab6e9774a: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" Jun 17 23:23:48.357: INFO: At 2022-06-17 23:23:48 +0000 UTC - event for startup-f20264cd-adbe-45d8-8ca3-e39ab6e9774a: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 320.291008ms Jun 17 23:23:48.359: INFO: POD NODE PHASE GRACE CONDITIONS Jun 17 23:23:48.359: INFO: startup-f20264cd-adbe-45d8-8ca3-e39ab6e9774a node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:23:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:23:40 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:23:40 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:23:40 +0000 UTC }] Jun 17 23:23:48.359: INFO: Jun 17 23:23:48.364: INFO: Logging node info for node master1 Jun 17 23:23:48.366: INFO: Node Info: &Node{ObjectMeta:{master1 47691bb2-4ee9-4386-8bec-0f9db1917afd 76424 0 2022-06-17 19:59:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:44 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:44 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:44 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:23:44 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 23:23:48.367: INFO: Logging kubelet events for node master1 Jun 17 23:23:48.369: INFO: Logging pods the kubelet thinks is on node master1 Jun 17 23:23:48.397: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.397: INFO: Container kube-scheduler ready: true, restart count 0 Jun 17 23:23:48.397: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.397: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:23:48.397: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:48.397: INFO: Container docker-registry ready: true, restart count 0 Jun 17 23:23:48.397: INFO: Container nginx ready: true, restart count 0 Jun 17 23:23:48.397: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:48.397: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:23:48.397: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:23:48.397: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.397: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 23:23:48.397: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.397: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 23:23:48.397: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 23:23:48.397: INFO: Init container install-cni ready: true, restart count 2 Jun 17 23:23:48.397: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:23:48.397: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.397: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:23:48.476: INFO: Latency metrics for node master1 Jun 17 23:23:48.476: INFO: Logging node info for node master2 Jun 17 23:23:48.479: INFO: Node Info: &Node{ObjectMeta:{master2 71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 76396 0 2022-06-17 19:59:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:41 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:41 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:41 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:23:41 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 23:23:48.480: INFO: Logging kubelet events for node master2 Jun 17 23:23:48.482: INFO: Logging pods the kubelet thinks is on node master2 Jun 17 23:23:48.493: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.493: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 23:23:48.493: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.493: INFO: Container kube-scheduler ready: true, restart count 2 Jun 17 23:23:48.493: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 23:23:48.493: INFO: Init container install-cni ready: true, restart count 2 Jun 17 23:23:48.493: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:23:48.493: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.493: INFO: Container nfd-controller ready: true, restart count 0 Jun 17 23:23:48.493: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:48.493: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:23:48.493: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:23:48.493: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.493: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 23:23:48.493: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.493: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 23:23:48.493: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.493: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:23:48.493: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.493: INFO: Container coredns ready: true, restart count 1 Jun 17 23:23:48.493: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.493: INFO: Container autoscaler ready: true, restart count 1 Jun 17 23:23:48.587: INFO: Latency metrics for node master2 Jun 17 23:23:48.587: INFO: Logging node info for node master3 Jun 17 23:23:48.589: INFO: Node Info: &Node{ObjectMeta:{master3 4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 76379 0 2022-06-17 19:59:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 
kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:41 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:41 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:41 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:23:41 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 
quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 23:23:48.590: INFO: Logging kubelet events for node master3 Jun 17 23:23:48.591: INFO: Logging pods the kubelet thinks is on node master3 Jun 17 23:23:48.603: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.603: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:23:48.603: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:48.603: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:23:48.603: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:23:48.603: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.603: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 23:23:48.603: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.603: INFO: Container kube-scheduler ready: true, restart count 2 Jun 17 23:23:48.603: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.603: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 23:23:48.603: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 23:23:48.603: INFO: Init container install-cni ready: true, restart count 0 Jun 17 23:23:48.603: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:23:48.603: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.603: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 23:23:48.603: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.603: INFO: Container coredns ready: true, restart count 1 Jun 17 23:23:48.603: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:48.603: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:23:48.603: INFO: Container prometheus-operator ready: true, restart count 0 Jun 17 23:23:48.698: INFO: Latency metrics for node master3 Jun 17 
23:23:48.698: INFO: Logging node info for node node1 Jun 17 23:23:48.701: INFO: Node Info: &Node{ObjectMeta:{node1 2db3a28c-448f-4511-9db8-4ef739b681b1 76518 0 2022-06-17 20:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 22:24:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:48 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:48 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:48 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:23:48 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 23:23:48.702: INFO: Logging kubelet events for node node1 Jun 17 23:23:48.705: INFO: Logging pods the kubelet thinks is on node node1 Jun 17 23:23:48.717: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 23:23:48.717: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:23:48.717: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:48.717: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:23:48.717: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:23:48.717: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:23:48.717: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:23:48.717: INFO: back-off-cap started at 2022-06-17 23:23:40 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container back-off-cap ready: true, restart count 0 Jun 17 23:23:48.717: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 23:23:48.717: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container tas-extender ready: true, restart count 0 Jun 17 23:23:48.717: INFO: busybox-51538c1b-9de1-41b1-8a85-6f16fcc1a222 started at 2022-06-17 23:23:41 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container busybox ready: true, restart count 0 Jun 17 23:23:48.717: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:23:48.717: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded) Jun 17 23:23:48.717: INFO: Container config-reloader ready: true, restart count 0 Jun 17 23:23:48.717: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 23:23:48.717: INFO: Container grafana ready: true, restart count 0 Jun 17 23:23:48.717: INFO: Container prometheus ready: true, restart count 1 Jun 17 23:23:48.717: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 17 23:23:48.717: INFO: Container collectd ready: true, restart count 0 Jun 17 23:23:48.717: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:23:48.717: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:23:48.717: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Init container install-cni ready: true, 
restart count 2 Jun 17 23:23:48.717: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:23:48.717: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:23:48.717: INFO: security-context-7c7b4865-f307-4551-8fc4-2624c19f934c started at 2022-06-17 23:23:48 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container test-container ready: false, restart count 0 Jun 17 23:23:48.717: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded) Jun 17 23:23:48.717: INFO: Container discover ready: false, restart count 0 Jun 17 23:23:48.717: INFO: Container init ready: false, restart count 0 Jun 17 23:23:48.717: INFO: Container install ready: false, restart count 0 Jun 17 23:23:48.717: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:48.717: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:23:48.717: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:23:48.717: INFO: dapi-test-pod started at 2022-06-17 23:23:40 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:48.717: INFO: Container test-container ready: false, restart count 0 Jun 17 23:23:51.282: INFO: Latency metrics for node node1 Jun 17 23:23:51.282: INFO: Logging node info for node node2 Jun 17 23:23:51.284: INFO: Node Info: &Node{ObjectMeta:{node2 467d2582-10f7-475b-9f20-5b7c2e46267a 76422 0 2022-06-17 20:00:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 22:24:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-17 23:05:09 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:44 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:44 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:23:44 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:23:44 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 23:23:51.285: INFO: Logging kubelet events for node node2 Jun 17 23:23:51.288: INFO: Logging pods the kubelet thinks is on node node2 Jun 17 23:23:51.300: INFO: privileged-pod started at 2022-06-17 23:23:48 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:51.300: INFO: Container not-privileged-container ready: false, restart count 0 Jun 17 23:23:51.300: INFO: Container privileged-container ready: false, restart count 0 Jun 17 23:23:51.300: INFO: secret-test-pod started at 2022-06-17 23:23:41 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.300: INFO: Container test-container ready: false, restart count 0 Jun 17 23:23:51.300: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.300: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:23:51.300: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded) Jun 17 23:23:51.300: INFO: Container discover ready: false, restart count 0 Jun 17 23:23:51.300: INFO: Container init ready: false, restart count 0 Jun 17 23:23:51.300: INFO: Container install ready: false, restart count 0 Jun 17 23:23:51.300: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.300: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:23:51.300: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:51.300: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:23:51.300: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:23:51.300: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 
container statuses recorded) Jun 17 23:23:51.300: INFO: Container collectd ready: true, restart count 0 Jun 17 23:23:51.301: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:23:51.301: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:23:51.301: INFO: busybox-readonly-true-7d7ee13d-e434-4497-be2e-9cc686860d87 started at 2022-06-17 23:23:40 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container busybox-readonly-true-7d7ee13d-e434-4497-be2e-9cc686860d87 ready: false, restart count 0 Jun 17 23:23:51.301: INFO: implicit-nonroot-uid started at 2022-06-17 23:23:40 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container implicit-nonroot-uid ready: false, restart count 0 Jun 17 23:23:51.301: INFO: startup-f20264cd-adbe-45d8-8ca3-e39ab6e9774a started at 2022-06-17 23:23:40 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container busybox ready: false, restart count 0 Jun 17 23:23:51.301: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:23:51.301: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:23:51.301: INFO: liveness-http started at 2022-06-17 23:23:49 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container liveness-http ready: false, restart count 0 Jun 17 23:23:51.301: INFO: busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9 started at 2022-06-17 23:23:40 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container busybox-privileged-true-1de8fca6-488c-44c7-843e-331363e6eed9 ready: false, restart count 0 Jun 17 23:23:51.301: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 23:23:51.301: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:23:51.301: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 23:23:51.301: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:23:51.301: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:23:51.301: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Init container install-cni ready: true, restart count 2 Jun 17 23:23:51.301: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:23:51.301: INFO: startup-150b5606-7420-44e9-b960-9ecac3bac981 started at 2022-06-17 23:23:40 +0000 UTC (0+1 container statuses recorded) Jun 17 23:23:51.301: INFO: Container busybox ready: false, restart count 0 Jun 17 23:23:52.514: INFO: Latency metrics for node node2 Jun 17 23:23:52.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5958" for this suite. •! 
Panic [12.213 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x654af00, 0x9c066c0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc000bf6f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001354340, 0xc000bf6f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000d099c8, 0xc001354340, 0xc0010c4240, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc000d099c8, 0xc001354340, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000d099c8, 0xc001354340, 0xc000d099c8, 0xc001354340) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc001354340, 0x14, 0xc003840ff0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc003bb7600, 0xc000d09620, 0x14, 0xc003840ff0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001539380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001539380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001539380, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ 
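Note on the panic above: the stack trace bottoms out in podContainerStarted.func1 (test/e2e/framework/pod/resource.go:334), the condition function that WaitForPodContainerStarted polls through wait.PollImmediate. The log alone does not show the offending statement, but a plausible cause is an unguarded dereference of ContainerStatus.Started, which is a *bool in the core/v1 API and stays nil until the kubelet reports it. The sketch below is a hypothetical, nil-safe version of such a check; containerStarted and the example pod are illustrative names, not the framework's actual code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// containerStarted reports whether the named container has started, treating a
// missing status entry or a nil Started pointer as "not started yet" rather
// than panicking on a nil dereference.
func containerStarted(pod *corev1.Pod, containerName string) bool {
	for _, status := range pod.Status.ContainerStatuses {
		if status.Name != containerName {
			continue
		}
		if status.Started == nil { // kubelet has not reported the field yet
			return false
		}
		return *status.Started
	}
	return false // no status published for this container yet
}

func main() {
	// A pod whose container status exists but whose Started field is still nil --
	// the shape that would trip an unguarded *status.Started dereference.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			ContainerStatuses: []corev1.ContainerStatus{{Name: "busybox"}},
		},
	}
	fmt.Println(containerStarted(pod, "busybox")) // prints "false" instead of panicking
}

Wrapped in wait.PollImmediate, a condition written this way simply keeps polling until the kubelet publishes the field, which is the behaviour the "should be ready immediately after startupProbe succeeds" spec needs.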
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples W0617 23:23:40.457321 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:23:40.457: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:23:40.459: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Jun 17 23:23:40.467: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Jun 17 23:23:40.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3049 create -f -' Jun 17 23:23:40.912: INFO: stderr: "" Jun 17 23:23:40.912: INFO: stdout: "secret/test-secret created\n" Jun 17 23:23:40.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3049 create -f -' Jun 17 23:23:41.268: INFO: stderr: "" Jun 17 23:23:41.268: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Jun 17 23:23:53.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3049 logs secret-test-pod test-container' Jun 17 23:23:53.463: INFO: stderr: "" Jun 17 23:23:53.463: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:53.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3049" for this suite. 
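Editor's note: the two kubectl invocations above create a Secret and a pod that mounts it; the pod's log ("content of file \"/etc/secret-volume/data-1\": value-1") shows the key data-1 mounted under /etc/secret-volume. The sketch below reconstructs those two objects; the names and data key mirror the log, while the image and exact command are assumptions.

// Sketch: Secret "test-secret" plus a pod that mounts it and reads data-1.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "test-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "test-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.34",
				Command: []string{"cat", "/etc/secret-volume/data-1"}, // expected output: value-1
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}

	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}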
• [SLOW TEST:13.037 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":1,"skipped":69,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:48.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 17 23:23:48.205: INFO: Waiting up to 5m0s for pod "security-context-7c7b4865-f307-4551-8fc4-2624c19f934c" in namespace "security-context-7855" to be "Succeeded or Failed" Jun 17 23:23:48.207: INFO: Pod "security-context-7c7b4865-f307-4551-8fc4-2624c19f934c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.992819ms Jun 17 23:23:50.211: INFO: Pod "security-context-7c7b4865-f307-4551-8fc4-2624c19f934c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00618456s Jun 17 23:23:52.215: INFO: Pod "security-context-7c7b4865-f307-4551-8fc4-2624c19f934c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010212549s Jun 17 23:23:54.220: INFO: Pod "security-context-7c7b4865-f307-4551-8fc4-2624c19f934c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014388525s Jun 17 23:23:56.225: INFO: Pod "security-context-7c7b4865-f307-4551-8fc4-2624c19f934c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020263973s STEP: Saw pod success Jun 17 23:23:56.225: INFO: Pod "security-context-7c7b4865-f307-4551-8fc4-2624c19f934c" satisfied condition "Succeeded or Failed" Jun 17 23:23:56.228: INFO: Trying to get logs from node node1 pod security-context-7c7b4865-f307-4551-8fc4-2624c19f934c container test-container: STEP: delete the pod Jun 17 23:23:56.242: INFO: Waiting for pod security-context-7c7b4865-f307-4551-8fc4-2624c19f934c to disappear Jun 17 23:23:56.244: INFO: Pod security-context-7c7b4865-f307-4551-8fc4-2624c19f934c no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:56.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7855" for this suite. 
• [SLOW TEST:8.083 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":3,"skipped":171,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:52.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:56.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5193" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":2,"skipped":104,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:56.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:59.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5850" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":4,"skipped":186,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:48.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Jun 17 23:23:48.469: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:23:50.473: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:23:52.473: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:23:54.474: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:23:56.475: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:23:58.473: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Jun 17 23:23:58.476: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-6399 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:23:58.476: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:23:58.734: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-6399 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:23:58.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Jun 17 23:23:59.154: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-6399 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:23:59.154: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:59.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-6399" for this suite. 
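Editor's note: the PrivilegedPod spec above execs "ip link add dummy1 type dummy" in both containers and expects it to succeed only in the privileged one. A minimal sketch of that pod follows; the container names match the log, while the image and keep-alive command are assumptions.

// Sketch: one privileged and one unprivileged container in the same pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:            "privileged-container",
					Image:           "busybox:1.34",
					Command:         []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
				},
				{
					Name:            "not-privileged-container",
					Image:           "busybox:1.34",
					Command:         []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // "ip link add dummy1 type dummy" should only work in the privileged container
}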
• [SLOW TEST:10.938 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":2,"skipped":94,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:59.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:23:59.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-9410" for this suite. •S ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":5,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:52.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 17 23:23:52.801: INFO: Waiting up to 5m0s for pod "security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497" in namespace "security-context-4495" to be "Succeeded or Failed" Jun 17 23:23:52.803: INFO: Pod "security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497": Phase="Pending", Reason="", readiness=false. Elapsed: 1.889236ms Jun 17 23:23:54.807: INFO: Pod "security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006215473s Jun 17 23:23:56.815: INFO: Pod "security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014641375s Jun 17 23:23:58.820: INFO: Pod "security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019251573s Jun 17 23:24:00.826: INFO: Pod "security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.024884049s STEP: Saw pod success Jun 17 23:24:00.826: INFO: Pod "security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497" satisfied condition "Succeeded or Failed" Jun 17 23:24:00.829: INFO: Trying to get logs from node node2 pod security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497 container test-container: STEP: delete the pod Jun 17 23:24:00.842: INFO: Waiting for pod security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497 to disappear Jun 17 23:24:00.845: INFO: Pod security-context-76e77ad3-fed8-4c4b-a4b8-aabc45e79497 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:00.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4495" for this suite. • [SLOW TEST:8.088 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:59.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:01.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1555" for this suite. 
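Editor's note: the Sysctls spec above submits a pod requesting a sysctl that is outside the kubelet's safe set and not enabled via --allowed-unsafe-sysctls, then checks that the pod is rejected rather than started. A hedged sketch of that kind of pod spec follows; the sysctl name and image are illustrative assumptions.

// Sketch: pod asking for a non-whitelisted sysctl, expected to be rejected.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// Not in the default safe sysctl set; without an explicit
				// --allowed-unsafe-sysctls entry the kubelet rejects the pod.
				Sysctls: []corev1.Sysctl{{Name: "kernel.msgmax", Value: "10000"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.34",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}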
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":6,"skipped":460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:59.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Jun 17 23:23:59.685: INFO: Waiting up to 5m0s for pod "busybox-user-0-568c8ae9-8d79-4886-94d3-f8ea5b94f169" in namespace "security-context-test-5061" to be "Succeeded or Failed" Jun 17 23:23:59.688: INFO: Pod "busybox-user-0-568c8ae9-8d79-4886-94d3-f8ea5b94f169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.434464ms Jun 17 23:24:01.692: INFO: Pod "busybox-user-0-568c8ae9-8d79-4886-94d3-f8ea5b94f169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00645513s Jun 17 23:24:03.695: INFO: Pod "busybox-user-0-568c8ae9-8d79-4886-94d3-f8ea5b94f169": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010264689s Jun 17 23:24:03.695: INFO: Pod "busybox-user-0-568c8ae9-8d79-4886-94d3-f8ea5b94f169" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:03.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5061" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:02.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 17 23:24:06.403: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:06.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1004" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":7,"skipped":653,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:06.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Jun 17 23:24:06.476: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:06.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-8227" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:06.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:06.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-9573" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":8,"skipped":783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:00.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Jun 17 23:24:00.942: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-37a40828-ec77-4780-804c-56b62f29e440" in namespace "security-context-test-6501" to be "Succeeded or Failed" Jun 17 23:24:00.944: INFO: Pod "alpine-nnp-nil-37a40828-ec77-4780-804c-56b62f29e440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.639035ms Jun 17 23:24:02.948: INFO: Pod "alpine-nnp-nil-37a40828-ec77-4780-804c-56b62f29e440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006738056s Jun 17 23:24:04.955: INFO: Pod "alpine-nnp-nil-37a40828-ec77-4780-804c-56b62f29e440": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0133685s Jun 17 23:24:06.958: INFO: Pod "alpine-nnp-nil-37a40828-ec77-4780-804c-56b62f29e440": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016905173s Jun 17 23:24:06.959: INFO: Pod "alpine-nnp-nil-37a40828-ec77-4780-804c-56b62f29e440" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:06.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6501" for this suite. • [SLOW TEST:6.067 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:07.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Jun 17 23:24:07.156: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-5dd06880-e21e-42fa-903b-54593bede534" in namespace "security-context-test-6376" to be "Succeeded or Failed" Jun 17 23:24:07.159: INFO: Pod "alpine-nnp-true-5dd06880-e21e-42fa-903b-54593bede534": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03623ms Jun 17 23:24:09.163: INFO: Pod "alpine-nnp-true-5dd06880-e21e-42fa-903b-54593bede534": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006056236s Jun 17 23:24:11.167: INFO: Pod "alpine-nnp-true-5dd06880-e21e-42fa-903b-54593bede534": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01036253s Jun 17 23:24:11.167: INFO: Pod "alpine-nnp-true-5dd06880-e21e-42fa-903b-54593bede534" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:11.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6376" for this suite. 
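Editor's note: the two AllowPrivilegeEscalation specs above differ only in whether the container's securityContext leaves the field unset or sets it to true while running as a non-root user. A hedged sketch of the "true" case follows; the image, command, and uid are assumptions (the suite uses its own alpine-based test image).

// Sketch: non-root container that explicitly allows privilege escalation.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-true"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "alpine",
				Image:   "alpine:3.15",
				Command: []string{"id", "-u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:                int64Ptr(1000),
					AllowPrivilegeEscalation: boolPtr(true), // no_new_privs is not set for this container
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}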
• ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":249,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:53.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-3d0a2497-3ac9-4b78-8a23-a756e7a59a6f in namespace container-probe-7029 Jun 17 23:24:01.700: INFO: Started pod startup-override-3d0a2497-3ac9-4b78-8a23-a756e7a59a6f in namespace container-probe-7029 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 23:24:01.702: INFO: Initial restart count of pod startup-override-3d0a2497-3ac9-4b78-8a23-a756e7a59a6f is 1 Jun 17 23:24:23.754: INFO: Restart count of pod container-probe-7029/startup-override-3d0a2497-3ac9-4b78-8a23-a756e7a59a6f is now 2 (22.052307609s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:23.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7029" for this suite. 
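Editor's note: the [Feature:ProbeTerminationGracePeriod] spec above relies on the probe-level terminationGracePeriodSeconds field added to core/v1 Probe in 1.21: when the startupProbe fails, its own grace period should override the pod-level one. A hedged sketch of that shape follows; the probe command, image, and timings are assumptions.

// Sketch: a startupProbe that always fails, carrying its own grace period.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	startup := &corev1.Probe{
		PeriodSeconds:                 5,
		FailureThreshold:              1,
		TerminationGracePeriodSeconds: int64Ptr(10), // probe-level override, should win
	}
	startup.Exec = &corev1.ExecAction{Command: []string{"cat", "/never-exists"}} // always fails

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-override-demo"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: int64Ptr(600), // pod-level default
			Containers: []corev1.Container{{
				Name:         "busybox",
				Image:        "busybox:1.34",
				Command:      []string{"sleep", "3600"},
				StartupProbe: startup,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}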
• [SLOW TEST:30.111 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":2,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:06.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-498c071d-b571-42c7-a974-1e0278dd049f in namespace container-probe-6356 Jun 17 23:24:10.888: INFO: Started pod liveness-override-498c071d-b571-42c7-a974-1e0278dd049f in namespace container-probe-6356 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 23:24:10.890: INFO: Initial restart count of pod liveness-override-498c071d-b571-42c7-a974-1e0278dd049f is 1 Jun 17 23:24:32.950: INFO: Restart count of pod container-probe-6356/liveness-override-498c071d-b571-42c7-a974-1e0278dd049f is now 2 (22.059861566s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:32.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6356" for this suite. 
• [SLOW TEST:26.118 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":9,"skipped":817,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:41.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-51538c1b-9de1-41b1-8a85-6f16fcc1a222 in namespace container-probe-2341 Jun 17 23:23:47.275: INFO: Started pod busybox-51538c1b-9de1-41b1-8a85-6f16fcc1a222 in namespace container-probe-2341 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 23:23:47.278: INFO: Initial restart count of pod busybox-51538c1b-9de1-41b1-8a85-6f16fcc1a222 is 0 Jun 17 23:24:37.378: INFO: Restart count of pod container-probe-2341/busybox-51538c1b-9de1-41b1-8a85-6f16fcc1a222 is now 1 (50.099648275s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:37.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2341" for this suite. 
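Editor's note: the probe specs above exercise exec liveness probes whose commands run longer than timeoutSeconds; since exec probe timeouts are enforced (the neighbouring spec is tagged [MinimumKubeletVersion:1.20]), the probe times out, counts as a failure, and the kubelet restarts the container. A hedged sketch of that probe shape follows; commands and timings are illustrative assumptions.

// Sketch: exec liveness probe that cannot finish within its timeout.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	liveness := &corev1.Probe{
		InitialDelaySeconds: 5,
		PeriodSeconds:       10,
		TimeoutSeconds:      1, // the probe command below cannot finish in time
		FailureThreshold:    1,
	}
	liveness.Exec = &corev1.ExecAction{Command: []string{"sh", "-c", "sleep 10"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "busybox",
				Image:         "busybox:1.34",
				Command:       []string{"sleep", "3600"},
				LivenessProbe: liveness,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // restartCount is expected to rise, as the log above shows
}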
• [SLOW TEST:56.155 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":2,"skipped":392,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:11.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-b90384c8-8c67-4f21-a1b0-26b4a3ec68ca in namespace kubelet-1388 I0617 23:24:11.465877 26 runners.go:190] Created replication controller with name: cleanup20-b90384c8-8c67-4f21-a1b0-26b4a3ec68ca, namespace: kubelet-1388, replica count: 20 I0617 23:24:21.516709 26 runners.go:190] cleanup20-b90384c8-8c67-4f21-a1b0-26b4a3ec68ca Pods: 20 out of 20 created, 3 running, 17 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 23:24:31.517383 26 runners.go:190] cleanup20-b90384c8-8c67-4f21-a1b0-26b4a3ec68ca Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 17 23:24:32.519: INFO: Checking pods on node node2 via /runningpods endpoint Jun 17 23:24:32.519: INFO: Checking pods on node node1 via /runningpods endpoint Jun 17 23:24:32.541: INFO: Resource usage on node "node1": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 1.256 6177.08 2205.53 "runtime" 0.653 2703.38 738.85 "kubelet" 0.653 2703.38 738.85 Resource usage on node "node2": container cpu(cores) memory_working_set(MB) memory_rss(MB) "runtime" 1.025 1493.89 519.56 "kubelet" 1.025 1493.89 519.56 "/" 1.806 4105.56 1252.70 Resource usage on node "master1": container cpu(cores) memory_working_set(MB) memory_rss(MB) "kubelet" 0.130 640.62 268.24 "/" 0.332 4739.88 1591.61 "runtime" 0.130 640.62 268.24 Resource usage on node "master2": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 0.504 3946.62 1774.82 "runtime" 0.110 663.82 297.22 "kubelet" 0.110 663.82 297.22 Resource usage on node "master3": container cpu(cores) memory_working_set(MB) memory_rss(MB) "runtime" 0.088 548.71 264.66 "kubelet" 0.088 548.71 264.66 "/" 0.348 3579.20 1587.93 STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-b90384c8-8c67-4f21-a1b0-26b4a3ec68ca in namespace kubelet-1388, will wait for the garbage collector to delete the pods Jun 17 23:24:32.601: INFO: Deleting ReplicationController cleanup20-b90384c8-8c67-4f21-a1b0-26b4a3ec68ca took: 7.03357ms Jun 17 23:24:33.202: INFO: Terminating ReplicationController cleanup20-b90384c8-8c67-4f21-a1b0-26b4a3ec68ca pods took: 600.991397ms Jun 17 23:24:52.604: INFO: Checking pods on node node2 via /runningpods endpoint Jun 17 23:24:52.604: INFO: Checking pods on node node1 via /runningpods endpoint Jun 17 23:24:52.916: INFO: Deleting 20 pods on 2 nodes completed in 1.313347558s after the RC was deleted Jun 17 23:24:52.917: INFO: CPU usage of containers on node "node2" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 0.920 1.518 1.518 1.518 1.518 "runtime" 0.000 0.000 0.693 0.693 0.693 0.693 0.693 "kubelet" 0.000 0.000 0.693 0.693 0.693 0.693 0.693 CPU usage of containers on node "master1" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 0.367 0.367 0.386 0.386 0.386 "runtime" 0.000 0.000 0.116 0.116 0.116 0.116 0.116 "kubelet" 0.000 0.000 0.116 0.116 0.116 0.116 0.116 CPU usage of containers on node "master2" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 0.504 0.504 0.716 0.716 0.716 "runtime" 0.000 0.000 0.109 0.110 0.110 0.110 0.110 "kubelet" 0.000 0.000 0.109 0.110 0.110 0.110 0.110 CPU usage of containers on node "master3" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 0.369 
0.369 0.408 0.408 0.408 "runtime" 0.000 0.000 0.088 0.089 0.089 0.089 0.089 "kubelet" 0.000 0.000 0.088 0.089 0.089 0.089 0.089 CPU usage of containers on node "node1" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 1.256 1.256 1.520 1.520 1.520 "runtime" 0.000 0.000 0.477 0.586 0.586 0.586 0.586 "kubelet" 0.000 0.000 0.477 0.586 0.586 0.586 0.586 [AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node node1 STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node node2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:52.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-1388" for this suite. • [SLOW TEST:41.548 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":4,"skipped":333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:53.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-986c39b1-cc88-492f-88f0-d2d26d451a6e in namespace container-probe-6730 Jun 17 23:24:01.192: INFO: Started pod busybox-986c39b1-cc88-492f-88f0-d2d26d451a6e in namespace container-probe-6730 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 23:24:01.194: INFO: Initial restart count of pod busybox-986c39b1-cc88-492f-88f0-d2d26d451a6e is 0 Jun 17 23:24:53.308: INFO: Restart count of pod container-probe-6730/busybox-986c39b1-cc88-492f-88f0-d2d26d451a6e is now 1 (52.114170952s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:24:53.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6730" for this suite. 
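Editor's note: the kubelet clean-up spec above creates a ReplicationController with 20 replicas, deletes it, and then checks each node's /runningpods endpoint to confirm every pod is torn down within the deadline. A hedged sketch of that controller follows; labels, image, and names are assumptions rather than the suite's generated values.

// Sketch: 20-replica ReplicationController used as deletion load.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "cleanup20"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "cleanup20", Namespace: "kubelet-1388"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(20),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "k8s.gcr.io/pause:3.4.1", // assumed minimal image
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}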
• [SLOW TEST:60.168 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":2,"skipped":398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:56.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-e49ffb16-c713-46c5-b52e-e16fbbbcfefa in namespace container-probe-1533 Jun 17 23:24:02.689: INFO: Started pod startup-e49ffb16-c713-46c5-b52e-e16fbbbcfefa in namespace container-probe-1533 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 23:24:02.692: INFO: Initial restart count of pod startup-e49ffb16-c713-46c5-b52e-e16fbbbcfefa is 0 Jun 17 23:25:00.941: INFO: Restart count of pod container-probe-1533/startup-e49ffb16-c713-46c5-b52e-e16fbbbcfefa is now 1 (58.249301442s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:00.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1533" for this suite. 
• [SLOW TEST:64.315 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":3,"skipped":123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:48.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Jun 17 23:23:48.573: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Jun 17 23:23:48.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-236 create -f -' Jun 17 23:23:49.045: INFO: stderr: "" Jun 17 23:23:49.045: INFO: stdout: "pod/liveness-exec created\n" Jun 17 23:23:49.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-236 create -f -' Jun 17 23:23:49.373: INFO: stderr: "" Jun 17 23:23:49.373: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Jun 17 23:23:55.382: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:23:57.382: INFO: Pod: liveness-http, restart count:0 Jun 17 23:23:57.385: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:23:59.385: INFO: Pod: liveness-http, restart count:0 Jun 17 23:23:59.388: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:01.391: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:01.395: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:03.397: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:03.397: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:05.401: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:05.401: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:07.405: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:07.405: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:09.408: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:09.408: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:11.412: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:11.412: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:13.416: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:13.416: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:15.420: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:15.420: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:17.424: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:17.424: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:19.427: INFO: Pod: 
liveness-exec, restart count:0 Jun 17 23:24:19.427: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:21.431: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:21.431: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:23.439: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:23.439: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:25.443: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:25.443: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:27.447: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:27.447: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:29.451: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:29.451: INFO: Pod: liveness-http, restart count:0 Jun 17 23:24:31.456: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:31.456: INFO: Pod: liveness-http, restart count:1 Jun 17 23:24:31.456: INFO: Saw liveness-http restart, succeeded... Jun 17 23:24:33.461: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:35.463: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:37.467: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:39.472: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:41.478: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:43.481: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:45.485: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:47.494: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:49.499: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:51.506: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:53.510: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:55.515: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:57.520: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:24:59.527: INFO: Pod: liveness-exec, restart count:0 Jun 17 23:25:01.536: INFO: Pod: liveness-exec, restart count:1 Jun 17 23:25:01.536: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:01.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-236" for this suite. 
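Editor's note: the liveness-exec and liveness-http pods above are the standard Kubernetes liveness examples: an exec-probe pod that removes its own health file, and an HTTP-probe pod whose /healthz endpoint starts failing after a short while, so both eventually restart (which is what the rising restart counts show). The sketch below reconstructs the exec variant; image tags and exact timings are assumptions, not copied from the suite's manifests.

// Sketch: pod that is healthy for ~30s, then fails its exec liveness probe.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	liveness := &corev1.Probe{InitialDelaySeconds: 5, PeriodSeconds: 5}
	liveness.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox:1.34",
				// Healthy while /tmp/healthy exists; after it is removed the
				// probe fails and the kubelet restarts the container.
				Command:       []string{"sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"},
				LivenessProbe: liveness,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}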
• [SLOW TEST:72.999 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":2,"skipped":94,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:53.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 17 23:24:53.704: INFO: Waiting up to 5m0s for pod "security-context-58071b98-c949-4baf-9ab4-a02de7b0c354" in namespace "security-context-4177" to be "Succeeded or Failed" Jun 17 23:24:53.706: INFO: Pod "security-context-58071b98-c949-4baf-9ab4-a02de7b0c354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229ms Jun 17 23:24:55.710: INFO: Pod "security-context-58071b98-c949-4baf-9ab4-a02de7b0c354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006132267s Jun 17 23:24:57.714: INFO: Pod "security-context-58071b98-c949-4baf-9ab4-a02de7b0c354": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010253354s Jun 17 23:24:59.719: INFO: Pod "security-context-58071b98-c949-4baf-9ab4-a02de7b0c354": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015021915s Jun 17 23:25:01.723: INFO: Pod "security-context-58071b98-c949-4baf-9ab4-a02de7b0c354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019176985s STEP: Saw pod success Jun 17 23:25:01.723: INFO: Pod "security-context-58071b98-c949-4baf-9ab4-a02de7b0c354" satisfied condition "Succeeded or Failed" Jun 17 23:25:01.726: INFO: Trying to get logs from node node2 pod security-context-58071b98-c949-4baf-9ab4-a02de7b0c354 container test-container: STEP: delete the pod Jun 17 23:25:01.740: INFO: Waiting for pod security-context-58071b98-c949-4baf-9ab4-a02de7b0c354 to disappear Jun 17 23:25:01.742: INFO: Pod security-context-58071b98-c949-4baf-9ab4-a02de7b0c354 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:01.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4177" for this suite. 
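Editor's note: the seccomp spec above drives a pod-level "unconfined" profile through the legacy seccomp.security.alpha.kubernetes.io/pod annotation (as the STEP line shows for this v1.21 suite). The sketch below uses the structured securityContext field, which is the modern equivalent; image and command are assumptions.

// Sketch: pod-level seccomp profile set to Unconfined.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "seccomp-unconfined-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeUnconfined},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.34",
				Command: []string{"grep", "Seccomp:", "/proc/1/status"}, // expect "Seccomp: 0" (disabled)
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}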
• [SLOW TEST:8.081 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:33.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-d1b0d12e-3843-45dd-91e0-b10c6d51f29e in namespace container-probe-946 Jun 17 23:24:49.077: INFO: Started pod liveness-d1b0d12e-3843-45dd-91e0-b10c6d51f29e in namespace container-probe-946 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 23:24:49.079: INFO: Initial restart count of pod liveness-d1b0d12e-3843-45dd-91e0-b10c6d51f29e is 0 Jun 17 23:25:07.120: INFO: Restart count of pod container-probe-946/liveness-d1b0d12e-3843-45dd-91e0-b10c6d51f29e is now 1 (18.041359801s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:07.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-946" for this suite. 
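In the local-redirect variant the probed endpoint answers with an HTTP redirect to another path on the same host; the kubelet's HTTP prober follows it, and once the redirected target starts failing the container is restarted, which is the restartCount 0 to 1 transition logged above (about 18s elapsed). A generic httpGet liveness probe sketch, again assuming the v0.21-era API types, with purely illustrative name, image, port and thresholds:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "nginx",
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// Same-host redirect responses are followed before the
						// probe result is decided, which is what the spec relies on.
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 10,
					PeriodSeconds:       5,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}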
• [SLOW TEST:34.100 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":10,"skipped":849,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:01.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Jun 17 23:25:01.420: INFO: Waiting up to 5m0s for pod "pod-always-succeed58a863b3-dc56-49f7-bae0-0af948c91edd" in namespace "pods-7657" to be "Succeeded or Failed" Jun 17 23:25:01.422: INFO: Pod "pod-always-succeed58a863b3-dc56-49f7-bae0-0af948c91edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040657ms Jun 17 23:25:03.425: INFO: Pod "pod-always-succeed58a863b3-dc56-49f7-bae0-0af948c91edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005312978s Jun 17 23:25:05.429: INFO: Pod "pod-always-succeed58a863b3-dc56-49f7-bae0-0af948c91edd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008938031s Jun 17 23:25:07.432: INFO: Pod "pod-always-succeed58a863b3-dc56-49f7-bae0-0af948c91edd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012225616s STEP: Saw pod success Jun 17 23:25:07.432: INFO: Pod "pod-always-succeed58a863b3-dc56-49f7-bae0-0af948c91edd" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:09.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7657" for this suite. 
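The pods-7657 spec is intentionally plain on the pod side: its single container just exits 0, and the real assertion is on the events afterwards, namely that the kubelet does not create a second sandbox for a pod that has already run to completion. An illustrative pod of that shape (name, image and restart policy are assumptions, not the test's values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-always-succeed-demo"},
		Spec: corev1.PodSpec{
			// With OnFailure (or Never), a zero exit code is terminal, so the pod
			// reaches Succeeded and nothing should trigger a new sandbox.
			RestartPolicy: corev1.RestartPolicyOnFailure,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/true"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The spec then lists the events recorded for the pod and fails if it sees more than one sandbox being created.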
• [SLOW TEST:8.067 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":4,"skipped":350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:09.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 Jun 17 23:25:09.528: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:09.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-7441" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:09.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Jun 17 23:25:09.717: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:09.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-8595" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:53.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:13.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6918" for this suite. • [SLOW TEST:20.084 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":5,"skipped":370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:07.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 17 23:25:07.461: INFO: Waiting up to 5m0s for pod "security-context-86bd8e1c-5fe5-4875-b9d9-a0d55290ec33" in namespace "security-context-7224" to be "Succeeded or Failed" Jun 17 23:25:07.465: INFO: Pod "security-context-86bd8e1c-5fe5-4875-b9d9-a0d55290ec33": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.023676ms Jun 17 23:25:09.468: INFO: Pod "security-context-86bd8e1c-5fe5-4875-b9d9-a0d55290ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007003014s Jun 17 23:25:11.473: INFO: Pod "security-context-86bd8e1c-5fe5-4875-b9d9-a0d55290ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011271769s Jun 17 23:25:13.475: INFO: Pod "security-context-86bd8e1c-5fe5-4875-b9d9-a0d55290ec33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014131429s STEP: Saw pod success Jun 17 23:25:13.475: INFO: Pod "security-context-86bd8e1c-5fe5-4875-b9d9-a0d55290ec33" satisfied condition "Succeeded or Failed" Jun 17 23:25:13.478: INFO: Trying to get logs from node node2 pod security-context-86bd8e1c-5fe5-4875-b9d9-a0d55290ec33 container test-container: STEP: delete the pod Jun 17 23:25:13.488: INFO: Waiting for pod security-context-86bd8e1c-5fe5-4875-b9d9-a0d55290ec33 to disappear Jun 17 23:25:13.490: INFO: Pod security-context-86bd8e1c-5fe5-4875-b9d9-a0d55290ec33 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:13.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7224" for this suite. • [SLOW TEST:6.071 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":11,"skipped":1000,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":3,"skipped":579,"failed":0} [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:01.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. 
STEP: verifying the node has the label foo-3e000acf-46f3-44e0-b747-523b7adc477c bar STEP: verifying the node has the label fizz-8fd26182-c96f-4da6-80d3-bf572a10949c buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-8fd26182-c96f-4da6-80d3-bf572a10949c off the node node2 STEP: verifying the node doesn't have the label fizz-8fd26182-c96f-4da6-80d3-bf572a10949c STEP: removing the label foo-3e000acf-46f3-44e0-b747-523b7adc477c off the node node2 STEP: verifying the node doesn't have the label foo-3e000acf-46f3-44e0-b747-523b7adc477c [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:13.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-3411" for this suite. • [SLOW TEST:12.122 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":4,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:09.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:13.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2896" for this suite. 
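The RuntimeClass spec works by labelling a node, creating a RuntimeClass whose scheduling.nodeSelector requires those labels, and then launching a pod that only names the RuntimeClass; the scheduler merges the class's nodeSelector into the pod, so it lands on the labelled node without taints being involved. A sketch of the two objects, with illustrative names, labels and handler (the handler must match one configured in the node's container runtime):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rcName := "demo-runtimeclass"

	rc := &nodev1.RuntimeClass{
		TypeMeta:   metav1.TypeMeta{APIVersion: "node.k8s.io/v1", Kind: "RuntimeClass"},
		ObjectMeta: metav1.ObjectMeta{Name: rcName},
		Handler:    "runc", // must exist in the CRI runtime's configuration
		Scheduling: &nodev1.Scheduling{
			NodeSelector: map[string]string{"foo-example": "bar"},
		},
	}

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "runtimeclass-demo-pod"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			RestartPolicy:    corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/true"},
			}},
		},
	}

	for _, obj := range []interface{}{rc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}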
•S ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":5,"skipped":549,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:14.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Jun 17 23:25:14.097: INFO: Waiting up to 5m0s for pod "security-context-a3afb4f6-6436-4cf4-a816-43dba6cf3a62" in namespace "security-context-326" to be "Succeeded or Failed" Jun 17 23:25:14.100: INFO: Pod "security-context-a3afb4f6-6436-4cf4-a816-43dba6cf3a62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91936ms Jun 17 23:25:16.103: INFO: Pod "security-context-a3afb4f6-6436-4cf4-a816-43dba6cf3a62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006102544s Jun 17 23:25:18.109: INFO: Pod "security-context-a3afb4f6-6436-4cf4-a816-43dba6cf3a62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011915087s Jun 17 23:25:20.112: INFO: Pod "security-context-a3afb4f6-6436-4cf4-a816-43dba6cf3a62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015112946s STEP: Saw pod success Jun 17 23:25:20.112: INFO: Pod "security-context-a3afb4f6-6436-4cf4-a816-43dba6cf3a62" satisfied condition "Succeeded or Failed" Jun 17 23:25:20.115: INFO: Trying to get logs from node node2 pod security-context-a3afb4f6-6436-4cf4-a816-43dba6cf3a62 container test-container: STEP: delete the pod Jun 17 23:25:20.126: INFO: Waiting for pod security-context-a3afb4f6-6436-4cf4-a816-43dba6cf3a62 to disappear Jun 17 23:25:20.128: INFO: Pod security-context-a3afb4f6-6436-4cf4-a816-43dba6cf3a62 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:20.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-326" for this suite. 
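pod.Spec.SecurityContext.SupplementalGroups adds extra GIDs to every container process in the pod, on top of the primary GID; a container running id -G should list them, which is essentially what the spec above checks in the container log. An illustrative sketch (the UID/GID values and image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "supplemental-groups-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:          int64Ptr(1000),
				SupplementalGroups: []int64{1234, 5678},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"id", "-G"}, // output is expected to include 1234 and 5678
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}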
• [SLOW TEST:6.075 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":6,"skipped":601,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:14.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:20.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4009" for this suite. • [SLOW TEST:6.044 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":5,"skipped":714,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:01.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 17 23:25:12.841: INFO: start=2022-06-17 23:25:07.811483311 +0000 UTC m=+89.354466538, now=2022-06-17 23:25:12.84188628 +0000 UTC m=+94.384869675, kubelet pod: {"metadata":{"name":"pod-submit-remove-5af0650f-8be5-455a-a4af-4a951d58f9c3","namespace":"pods-9492","uid":"1068935d-3df0-4847-8e16-edf9821f0618","resourceVersion":"78117","creationTimestamp":"2022-06-17T23:25:01Z","deletionTimestamp":"2022-06-17T23:25:37Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"781400554"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.188\"\n ],\n \"mac\": \"da:76:38:80:d5:2d\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.188\"\n ],\n \"mac\": \"da:76:38:80:d5:2d\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2022-06-17T23:25:01.797608288Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-06-17T23:25:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-mw2dq","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-mw2dq","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-17T23:25:01Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-17T23:25:07Z"},{"type":"ContainersReady","status":"True","lastProbeTime
":null,"lastTransitionTime":"2022-06-17T23:25:07Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-17T23:25:01Z"}],"hostIP":"10.10.190.208","podIP":"10.244.3.188","podIPs":[{"ip":"10.244.3.188"}],"startTime":"2022-06-17T23:25:01Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2022-06-17T23:25:06Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://a8113a5df6b8621708578e1a5defa2b7548b80f1f3401101f89f592bb1ad022a","started":true}],"qosClass":"BestEffort"}} Jun 17 23:25:17.830: INFO: start=2022-06-17 23:25:07.811483311 +0000 UTC m=+89.354466538, now=2022-06-17 23:25:17.830852042 +0000 UTC m=+99.373835384, kubelet pod: {"metadata":{"name":"pod-submit-remove-5af0650f-8be5-455a-a4af-4a951d58f9c3","namespace":"pods-9492","uid":"1068935d-3df0-4847-8e16-edf9821f0618","resourceVersion":"78117","creationTimestamp":"2022-06-17T23:25:01Z","deletionTimestamp":"2022-06-17T23:25:37Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"781400554"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.188\"\n ],\n \"mac\": \"da:76:38:80:d5:2d\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.188\"\n ],\n \"mac\": \"da:76:38:80:d5:2d\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2022-06-17T23:25:01.797608288Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-06-17T23:25:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-mw2dq","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-mw2dq","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect"
:"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-17T23:25:01Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-06-17T23:25:14Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-06-17T23:25:14Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-17T23:25:01Z"}],"hostIP":"10.10.190.208","startTime":"2022-06-17T23:25:01Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"","started":false}],"qosClass":"BestEffort"}} Jun 17 23:25:22.827: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:22.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9492" for this suite. • [SLOW TEST:21.077 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":3,"skipped":214,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:20.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:27.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2387" for this suite. 
• [SLOW TEST:7.089 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":6,"skipped":1049,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:23.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-edd87bd2-c933-457c-bfdb-0738b270eda5 in namespace container-probe-3920 Jun 17 23:24:28.022: INFO: Started pod startup-edd87bd2-c933-457c-bfdb-0738b270eda5 in namespace container-probe-3920 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 23:24:28.025: INFO: Initial restart count of pod startup-edd87bd2-c933-457c-bfdb-0738b270eda5 is 0 Jun 17 23:25:36.176: INFO: Restart count of pod container-probe-3920/startup-edd87bd2-c933-457c-bfdb-0738b270eda5 is now 1 (1m8.150874916s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:36.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3920" for this suite. 
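For the startup-probe spec the restart arrives much later (1m8s here) because the kubelet gives the startup probe failureThreshold x periodSeconds to succeed before it kills the container, and liveness and readiness probes are held back for that whole window. A sketch of a pod whose startup probe can never pass, assuming the v0.21-era API types; the file path, image and thresholds are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "startup-probe-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				StartupProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// /tmp/started is never created, so the probe keeps failing and the
						// container is restarted after FailureThreshold * PeriodSeconds.
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}},
					},
					PeriodSeconds:    10,
					FailureThreshold: 6, // roughly a one-minute budget, like the restart seen above
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}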
• [SLOW TEST:72.211 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":3,"skipped":271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:13.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Jun 17 23:25:37.838: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:37.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7617" for this suite. • [SLOW TEST:24.086 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":12,"skipped":1141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:36.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Jun 17 23:25:36.373: INFO: Waiting up to 5m0s for pod "downward-api-87d2ffee-2770-4109-b181-69d937ee9b72" in namespace "downward-api-1299" to be "Succeeded or Failed" Jun 17 23:25:36.375: INFO: Pod "downward-api-87d2ffee-2770-4109-b181-69d937ee9b72": Phase="Pending", Reason="", readiness=false. Elapsed: 1.945935ms Jun 17 23:25:38.379: INFO: Pod "downward-api-87d2ffee-2770-4109-b181-69d937ee9b72": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005790164s STEP: Saw pod success Jun 17 23:25:38.379: INFO: Pod "downward-api-87d2ffee-2770-4109-b181-69d937ee9b72" satisfied condition "Succeeded or Failed" Jun 17 23:25:38.381: INFO: Trying to get logs from node node2 pod downward-api-87d2ffee-2770-4109-b181-69d937ee9b72 container dapi-container: STEP: delete the pod Jun 17 23:25:38.409: INFO: Waiting for pod downward-api-87d2ffee-2770-4109-b181-69d937ee9b72 to disappear Jun 17 23:25:38.413: INFO: Pod downward-api-87d2ffee-2770-4109-b181-69d937ee9b72 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:38.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1299" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":4,"skipped":347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:37.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Jun 17 23:25:37.958: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2939" to be "Succeeded or Failed" Jun 17 23:25:37.962: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476701ms Jun 17 23:25:39.965: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007497007s Jun 17 23:25:41.969: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010965524s Jun 17 23:25:43.972: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014043552s Jun 17 23:25:45.974: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01645748s Jun 17 23:25:45.974: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:45.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2939" for this suite. 
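The two runAsNonRoot specs in this stretch are mirror images: with runAsNonRoot true and an explicit root UID (0) the kubelet refuses to start the container, while an explicit non-root UID is accepted and the pod runs to completion, as explicit-nonroot-uid does above. The passing shape looks roughly like this (UID, name and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-nonroot-uid-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"id", "-u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: boolPtr(true),
					RunAsUser:    int64Ptr(1000), // setting 0 here would be rejected by the kubelet
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}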
• [SLOW TEST:8.071 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":13,"skipped":1178,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:20.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Jun 17 23:25:20.540: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:25:22.545: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:25:24.544: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:25:26.544: INFO: The status of Pod master is Running (Ready = true) Jun 17 23:25:26.559: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:25:28.562: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:25:30.563: INFO: The status of Pod slave is Running (Ready = true) Jun 17 23:25:30.580: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:25:32.585: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:25:34.583: INFO: The status of Pod private is Running (Ready = true) Jun 17 23:25:34.598: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:25:36.602: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Jun 17 23:25:38.601: INFO: The status of Pod default is Running (Ready = true) Jun 17 23:25:38.605: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:38.605: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:38.798: INFO: Exec stderr: "" Jun 17 23:25:38.802: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:38.802: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:38.894: INFO: Exec stderr: "" Jun 17 23:25:38.897: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:38.897: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:39.011: INFO: Exec stderr: "" Jun 17 23:25:39.013: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:39.013: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:39.145: INFO: Exec stderr: "" Jun 17 23:25:39.149: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:39.149: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:39.235: INFO: Exec stderr: "" Jun 17 23:25:39.237: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:39.238: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:39.342: INFO: Exec stderr: "" Jun 17 23:25:39.345: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:39.345: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:39.434: INFO: Exec stderr: "" Jun 17 23:25:39.437: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:39.437: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:39.525: INFO: Exec stderr: "" Jun 17 23:25:39.528: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:39.528: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:39.680: INFO: Exec stderr: "" Jun 17 23:25:39.683: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:39.683: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:39.814: INFO: Exec stderr: "" Jun 17 23:25:39.817: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:39.817: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:39.922: INFO: Exec stderr: "" Jun 17 23:25:39.925: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:39.925: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:40.005: INFO: Exec stderr: "" Jun 17 23:25:40.008: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:40.008: INFO: >>> kubeConfig: 
/root/.kube/config Jun 17 23:25:40.103: INFO: Exec stderr: "" Jun 17 23:25:40.106: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:40.106: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:40.190: INFO: Exec stderr: "" Jun 17 23:25:40.193: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:40.193: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:40.287: INFO: Exec stderr: "" Jun 17 23:25:40.289: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:40.289: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:40.391: INFO: Exec stderr: "" Jun 17 23:25:40.393: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:40.393: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:40.483: INFO: Exec stderr: "" Jun 17 23:25:40.486: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:40.486: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:40.583: INFO: Exec stderr: "" Jun 17 23:25:40.587: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:40.587: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:40.668: INFO: Exec stderr: "" Jun 17 23:25:40.671: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:40.671: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:40.756: INFO: Exec stderr: "" Jun 17 23:25:42.773: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-6908"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-6908"/host; echo host > "/var/lib/kubelet/mount-propagation-6908"/host/file] Namespace:mount-propagation-6908 PodName:hostexec-node1-46jjg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 17 23:25:42.773: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:42.868: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false 
Quiet:false} Jun 17 23:25:42.868: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:42.954: INFO: pod slave mount master: stdout: "master", stderr: "" error: Jun 17 23:25:42.956: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:42.956: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.046: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Jun 17 23:25:43.049: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.049: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.150: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:43.153: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.153: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.237: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:43.239: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.239: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.326: INFO: pod slave mount host: stdout: "host", stderr: "" error: Jun 17 23:25:43.328: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.328: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.414: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:43.416: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.416: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.524: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:43.531: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.531: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.612: INFO: pod private mount private: stdout: "private", stderr: "" error: Jun 17 23:25:43.614: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.615: INFO: >>> 
kubeConfig: /root/.kube/config Jun 17 23:25:43.703: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:43.705: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.705: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.783: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:43.786: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.786: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.874: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:43.877: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.877: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:43.963: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:43.965: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:43.965: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:44.051: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:44.054: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:44.054: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:44.143: INFO: pod default mount default: stdout: "default", stderr: "" error: Jun 17 23:25:44.146: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:44.146: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:45.515: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:45.518: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:45.518: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:45.680: INFO: pod master mount master: stdout: "master", stderr: "" error: Jun 17 23:25:45.682: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] 
Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:45.682: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:45.775: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:45.779: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:45.779: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:45.870: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:45.872: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:45.872: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:45.957: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 17 23:25:45.959: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:45.959: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:46.050: INFO: pod master mount host: stdout: "host", stderr: "" error: Jun 17 23:25:46.050: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-6908"/master/file` = master] Namespace:mount-propagation-6908 PodName:hostexec-node1-46jjg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 17 23:25:46.050: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:46.152: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-6908"/slave/file] Namespace:mount-propagation-6908 PodName:hostexec-node1-46jjg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 17 23:25:46.152: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:46.258: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-6908"/host] Namespace:mount-propagation-6908 PodName:hostexec-node1-46jjg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 17 23:25:46.258: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:46.377: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-6908 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:46.377: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:46.480: INFO: Exec stderr: "" Jun 17 23:25:46.482: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-6908 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:46.482: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:46.580: INFO: Exec stderr: "" Jun 17 23:25:46.583: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-6908 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:46.583: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:46.693: INFO: Exec stderr: "" Jun 17 23:25:46.697: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-6908 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 23:25:46.697: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:25:46.805: INFO: Exec stderr: "" Jun 17 23:25:46.805: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-6908"] Namespace:mount-propagation-6908 PodName:hostexec-node1-46jjg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 17 23:25:46.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node1-46jjg in namespace mount-propagation-6908 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:46.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-6908" for this suite. 
• [SLOW TEST:26.436 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":7,"skipped":785,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Jun 17 23:25:47.114: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:38.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 17 23:25:38.576: INFO: Waiting up to 5m0s for pod "security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1" in namespace "security-context-8453" to be "Succeeded or Failed" Jun 17 23:25:38.578: INFO: Pod "security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.558123ms Jun 17 23:25:40.582: INFO: Pod "security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006146385s Jun 17 23:25:42.589: INFO: Pod "security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013208707s Jun 17 23:25:44.594: INFO: Pod "security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018363748s Jun 17 23:25:46.596: INFO: Pod "security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020931191s Jun 17 23:25:48.600: INFO: Pod "security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024190208s Jun 17 23:25:50.604: INFO: Pod "security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.028207491s STEP: Saw pod success Jun 17 23:25:50.604: INFO: Pod "security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1" satisfied condition "Succeeded or Failed" Jun 17 23:25:50.606: INFO: Trying to get logs from node node2 pod security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1 container test-container: STEP: delete the pod Jun 17 23:25:50.617: INFO: Waiting for pod security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1 to disappear Jun 17 23:25:50.619: INFO: Pod security-context-b57bb761-2c8d-4f78-b28a-97276ad771d1 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:50.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8453" for this suite. 
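The step above names the legacy seccomp.security.alpha.kubernetes.io/pod annotation. A minimal sketch of an equivalent pod, assuming a busybox image and a hypothetical verification command; the annotation and the securityContext.seccompProfile field (available on v1.21) are shown side by side:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "security-context-unconfined",
			// Legacy annotation named in the test step above.
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Field-based equivalent of the annotation.
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeUnconfined},
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				// Hypothetical check: an unconfined container reports "Seccomp: 0".
				Command: []string{"sh", "-c", "grep Seccomp /proc/self/status"},
			}},
		},
	}
	fmt.Println(pod.Name)
}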
• [SLOW TEST:12.084 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":5,"skipped":410,"failed":0} Jun 17 23:25:50.627: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:46.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 17 23:25:46.075: INFO: Waiting up to 5m0s for pod "security-context-b06b4313-f7b9-4a16-982b-b64641e527ba" in namespace "security-context-1236" to be "Succeeded or Failed" Jun 17 23:25:46.078: INFO: Pod "security-context-b06b4313-f7b9-4a16-982b-b64641e527ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.444088ms Jun 17 23:25:48.082: INFO: Pod "security-context-b06b4313-f7b9-4a16-982b-b64641e527ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006697411s Jun 17 23:25:50.085: INFO: Pod "security-context-b06b4313-f7b9-4a16-982b-b64641e527ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009807634s Jun 17 23:25:52.089: INFO: Pod "security-context-b06b4313-f7b9-4a16-982b-b64641e527ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013838156s Jun 17 23:25:54.093: INFO: Pod "security-context-b06b4313-f7b9-4a16-982b-b64641e527ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017581145s STEP: Saw pod success Jun 17 23:25:54.093: INFO: Pod "security-context-b06b4313-f7b9-4a16-982b-b64641e527ba" satisfied condition "Succeeded or Failed" Jun 17 23:25:54.095: INFO: Trying to get logs from node node2 pod security-context-b06b4313-f7b9-4a16-982b-b64641e527ba container test-container: STEP: delete the pod Jun 17 23:25:54.107: INFO: Waiting for pod security-context-b06b4313-f7b9-4a16-982b-b64641e527ba to disappear Jun 17 23:25:54.109: INFO: Pod security-context-b06b4313-f7b9-4a16-982b-b64641e527ba no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:25:54.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1236" for this suite. 
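For the pod.Spec.SecurityContext.RunAsUser case above, the pod-level security context sets the UID every container runs as. A minimal sketch with a hypothetical UID and command (the suite chooses its own values):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // hypothetical UID

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-runasuser"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "id -u"}, // container log should print the UID set above
			}},
		},
	}
	fmt.Println(pod.Name, *pod.Spec.SecurityContext.RunAsUser)
}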
• [SLOW TEST:8.079 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":14,"skipped":1198,"failed":0} Jun 17 23:25:54.119: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:13.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-39e471dc-e161-4803-8891-7da6c666138f in namespace container-probe-5386 Jun 17 23:25:19.411: INFO: Started pod busybox-39e471dc-e161-4803-8891-7da6c666138f in namespace container-probe-5386 Jun 17 23:25:19.411: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (1.636µs elapsed) Jun 17 23:25:21.413: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (2.001940613s elapsed) Jun 17 23:25:23.416: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (4.00427203s elapsed) Jun 17 23:25:25.416: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (6.004892698s elapsed) Jun 17 23:25:27.419: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (8.007240921s elapsed) Jun 17 23:25:29.420: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (10.009004403s elapsed) Jun 17 23:25:31.421: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (12.009471626s elapsed) Jun 17 23:25:33.423: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (14.011190404s elapsed) Jun 17 23:25:35.424: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (16.012491094s elapsed) Jun 17 23:25:37.425: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (18.013807787s elapsed) Jun 17 23:25:39.427: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (20.015651001s elapsed) Jun 17 23:25:41.429: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (22.017716035s elapsed) Jun 17 23:25:43.434: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (24.022803916s elapsed) Jun 17 23:25:45.436: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (26.024321058s elapsed) Jun 17 23:25:47.438: INFO: pod 
container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (28.02619624s elapsed) Jun 17 23:25:49.438: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (30.026699861s elapsed) Jun 17 23:25:51.439: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (32.027452569s elapsed) Jun 17 23:25:53.444: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (34.03270325s elapsed) Jun 17 23:25:55.445: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (36.033496271s elapsed) Jun 17 23:25:57.449: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (38.037720041s elapsed) Jun 17 23:25:59.452: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (40.040728575s elapsed) Jun 17 23:26:01.454: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (42.042773431s elapsed) Jun 17 23:26:03.460: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (44.048706109s elapsed) Jun 17 23:26:05.461: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (46.049899594s elapsed) Jun 17 23:26:07.463: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (48.051133694s elapsed) Jun 17 23:26:09.466: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (50.054251365s elapsed) Jun 17 23:26:11.471: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (52.05925815s elapsed) Jun 17 23:26:13.471: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (54.059403367s elapsed) Jun 17 23:26:15.472: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (56.060393144s elapsed) Jun 17 23:26:17.478: INFO: pod container-probe-5386/busybox-39e471dc-e161-4803-8891-7da6c666138f is not ready (58.06647256s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:26:19.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5386" for this suite. 
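The pod above is polled for a full minute and never becomes ready because its exec readiness probe runs longer than timeoutSeconds, so every attempt times out; enforcing timeouts for exec probes is gated by ExecProbeTimeout (on by default from 1.20), which is why the spec carries MinimumKubeletVersion:1.20. A minimal sketch with placeholder sleep durations:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Readiness probe whose exec command takes longer than timeoutSeconds,
	// so every probe attempt times out and the pod is never marked Ready.
	readiness := corev1.Probe{
		InitialDelaySeconds: 5,
		PeriodSeconds:       10,
		TimeoutSeconds:      1,
	}
	readiness.Exec = &corev1.ExecAction{Command: []string{"sh", "-c", "sleep 10"}} // hypothetical slow command

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-readiness-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "busybox",
				Image:          "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command:        []string{"sh", "-c", "sleep 600"},
				ReadinessProbe: &readiness,
			}},
		},
	}
	fmt.Println(pod.Name)
}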
• [SLOW TEST:66.122 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":6,"skipped":497,"failed":0} Jun 17 23:26:19.495: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:38.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Jun 17 23:24:42.514: INFO: watch delete seen for pod-submit-status-1-0 Jun 17 23:24:42.514: INFO: Pod pod-submit-status-1-0 on node node2 timings total=4.286670789s t=817ms run=0s execute=0s Jun 17 23:24:46.287: INFO: watch delete seen for pod-submit-status-2-0 Jun 17 23:24:46.287: INFO: Pod pod-submit-status-2-0 on node node1 timings total=8.059519339s t=1.044s run=0s execute=0s Jun 17 23:24:46.911: INFO: watch delete seen for pod-submit-status-0-0 Jun 17 23:24:46.911: INFO: Pod pod-submit-status-0-0 on node node2 timings total=8.683967341s t=148ms run=0s execute=0s Jun 17 23:24:48.910: INFO: watch delete seen for pod-submit-status-1-1 Jun 17 23:24:48.910: INFO: Pod pod-submit-status-1-1 on node node2 timings total=6.395684136s t=1.188s run=0s execute=0s Jun 17 23:24:53.910: INFO: watch delete seen for pod-submit-status-2-1 Jun 17 23:24:53.910: INFO: Pod pod-submit-status-2-1 on node node2 timings total=7.62295881s t=1.076s run=0s execute=0s Jun 17 23:24:55.112: INFO: watch delete seen for pod-submit-status-0-1 Jun 17 23:24:55.112: INFO: Pod pod-submit-status-0-1 on node node2 timings total=8.200964498s t=1.386s run=0s execute=0s Jun 17 23:24:55.511: INFO: watch delete seen for pod-submit-status-1-2 Jun 17 23:24:55.511: INFO: Pod pod-submit-status-1-2 on node node2 timings total=6.601523267s t=1.018s run=0s execute=0s Jun 17 23:25:01.312: INFO: watch delete seen for pod-submit-status-2-2 Jun 17 23:25:01.312: INFO: Pod pod-submit-status-2-2 on node node2 timings total=7.40217118s t=363ms run=0s execute=0s Jun 17 23:25:04.959: INFO: watch delete seen for pod-submit-status-2-3 Jun 17 23:25:04.959: INFO: Pod pod-submit-status-2-3 on node node1 timings total=3.647028029s t=1.283s run=0s execute=0s Jun 17 23:25:08.390: INFO: watch delete seen for pod-submit-status-1-3 Jun 17 23:25:08.390: INFO: Pod pod-submit-status-1-3 on node node2 timings total=12.87866178s t=772ms run=0s execute=0s Jun 17 23:25:19.327: INFO: watch delete seen for pod-submit-status-2-4 Jun 17 23:25:19.327: INFO: Pod pod-submit-status-2-4 on node node1 timings total=14.367694446s t=645ms run=0s 
execute=0s Jun 17 23:25:19.337: INFO: watch delete seen for pod-submit-status-1-4 Jun 17 23:25:19.337: INFO: Pod pod-submit-status-1-4 on node node1 timings total=10.94695771s t=808ms run=0s execute=0s Jun 17 23:25:22.055: INFO: watch delete seen for pod-submit-status-2-5 Jun 17 23:25:22.055: INFO: Pod pod-submit-status-2-5 on node node1 timings total=2.728216822s t=445ms run=0s execute=0s Jun 17 23:25:23.098: INFO: watch delete seen for pod-submit-status-0-2 Jun 17 23:25:23.098: INFO: Pod pod-submit-status-0-2 on node node2 timings total=27.985851272s t=1.643s run=0s execute=0s Jun 17 23:25:28.430: INFO: watch delete seen for pod-submit-status-2-6 Jun 17 23:25:28.430: INFO: Pod pod-submit-status-2-6 on node node2 timings total=6.374868071s t=1.51s run=0s execute=0s Jun 17 23:25:29.324: INFO: watch delete seen for pod-submit-status-1-5 Jun 17 23:25:29.324: INFO: Pod pod-submit-status-1-5 on node node1 timings total=9.986888166s t=1.492s run=0s execute=0s Jun 17 23:25:38.380: INFO: watch delete seen for pod-submit-status-2-7 Jun 17 23:25:38.380: INFO: Pod pod-submit-status-2-7 on node node2 timings total=9.950127441s t=279ms run=0s execute=0s Jun 17 23:25:38.390: INFO: watch delete seen for pod-submit-status-1-6 Jun 17 23:25:38.390: INFO: Pod pod-submit-status-1-6 on node node2 timings total=9.065591137s t=948ms run=0s execute=0s Jun 17 23:25:38.407: INFO: watch delete seen for pod-submit-status-0-3 Jun 17 23:25:38.407: INFO: Pod pod-submit-status-0-3 on node node2 timings total=15.30853401s t=536ms run=0s execute=0s Jun 17 23:25:41.842: INFO: watch delete seen for pod-submit-status-1-7 Jun 17 23:25:41.842: INFO: Pod pod-submit-status-1-7 on node node2 timings total=3.451907552s t=1s run=0s execute=0s Jun 17 23:25:42.409: INFO: watch delete seen for pod-submit-status-0-4 Jun 17 23:25:42.410: INFO: Pod pod-submit-status-0-4 on node node2 timings total=4.002645469s t=1.141s run=0s execute=0s Jun 17 23:25:42.811: INFO: watch delete seen for pod-submit-status-2-8 Jun 17 23:25:42.811: INFO: Pod pod-submit-status-2-8 on node node2 timings total=4.430481652s t=1.63s run=0s execute=0s Jun 17 23:25:47.808: INFO: watch delete seen for pod-submit-status-0-5 Jun 17 23:25:47.809: INFO: Pod pod-submit-status-0-5 on node node2 timings total=5.398952485s t=239ms run=0s execute=0s Jun 17 23:25:58.377: INFO: watch delete seen for pod-submit-status-2-9 Jun 17 23:25:58.377: INFO: Pod pod-submit-status-2-9 on node node2 timings total=15.565997061s t=954ms run=0s execute=0s Jun 17 23:25:58.388: INFO: watch delete seen for pod-submit-status-1-8 Jun 17 23:25:58.388: INFO: Pod pod-submit-status-1-8 on node node2 timings total=16.545901459s t=1.084s run=0s execute=0s Jun 17 23:25:58.403: INFO: watch delete seen for pod-submit-status-0-6 Jun 17 23:25:58.404: INFO: Pod pod-submit-status-0-6 on node node2 timings total=10.594898728s t=1.338s run=0s execute=0s Jun 17 23:26:08.379: INFO: watch delete seen for pod-submit-status-2-10 Jun 17 23:26:08.379: INFO: Pod pod-submit-status-2-10 on node node2 timings total=10.001904845s t=1.576s run=0s execute=0s Jun 17 23:26:08.388: INFO: watch delete seen for pod-submit-status-1-9 Jun 17 23:26:08.388: INFO: Pod pod-submit-status-1-9 on node node2 timings total=9.999639936s t=833ms run=0s execute=0s Jun 17 23:26:11.214: INFO: watch delete seen for pod-submit-status-1-10 Jun 17 23:26:11.214: INFO: Pod pod-submit-status-1-10 on node node2 timings total=2.826185355s t=1.19s run=0s execute=0s Jun 17 23:26:18.382: INFO: watch delete seen for pod-submit-status-2-11 Jun 17 23:26:18.382: 
INFO: Pod pod-submit-status-2-11 on node node2 timings total=10.002724038s t=1.73s run=0s execute=0s Jun 17 23:26:19.322: INFO: watch delete seen for pod-submit-status-1-11 Jun 17 23:26:19.322: INFO: Pod pod-submit-status-1-11 on node node1 timings total=8.108177035s t=449ms run=0s execute=0s Jun 17 23:26:22.454: INFO: watch delete seen for pod-submit-status-0-7 Jun 17 23:26:22.454: INFO: Pod pod-submit-status-0-7 on node node2 timings total=24.050761412s t=1.037s run=0s execute=0s Jun 17 23:26:29.335: INFO: watch delete seen for pod-submit-status-2-12 Jun 17 23:26:29.335: INFO: Pod pod-submit-status-2-12 on node node1 timings total=10.953670209s t=1.038s run=2s execute=0s Jun 17 23:26:29.349: INFO: watch delete seen for pod-submit-status-1-12 Jun 17 23:26:29.349: INFO: Pod pod-submit-status-1-12 on node node1 timings total=10.026731836s t=1.615s run=0s execute=0s Jun 17 23:26:31.080: INFO: watch delete seen for pod-submit-status-2-13 Jun 17 23:26:31.080: INFO: Pod pod-submit-status-2-13 on node node2 timings total=1.744360849s t=563ms run=0s execute=0s Jun 17 23:26:38.389: INFO: watch delete seen for pod-submit-status-2-14 Jun 17 23:26:38.389: INFO: Pod pod-submit-status-2-14 on node node2 timings total=7.308691576s t=591ms run=0s execute=0s Jun 17 23:26:38.413: INFO: watch delete seen for pod-submit-status-0-8 Jun 17 23:26:38.413: INFO: Pod pod-submit-status-0-8 on node node2 timings total=15.959072374s t=1.263s run=0s execute=0s Jun 17 23:26:38.424: INFO: watch delete seen for pod-submit-status-1-13 Jun 17 23:26:38.425: INFO: Pod pod-submit-status-1-13 on node node2 timings total=9.075365996s t=793ms run=0s execute=0s Jun 17 23:26:48.379: INFO: watch delete seen for pod-submit-status-1-14 Jun 17 23:26:48.379: INFO: Pod pod-submit-status-1-14 on node node2 timings total=9.954446718s t=757ms run=0s execute=0s Jun 17 23:26:48.387: INFO: watch delete seen for pod-submit-status-0-9 Jun 17 23:26:48.387: INFO: Pod pod-submit-status-0-9 on node node2 timings total=9.973708553s t=1.472s run=0s execute=0s Jun 17 23:26:58.376: INFO: watch delete seen for pod-submit-status-0-10 Jun 17 23:26:58.376: INFO: Pod pod-submit-status-0-10 on node node2 timings total=9.988692125s t=1.199s run=0s execute=0s Jun 17 23:27:08.374: INFO: watch delete seen for pod-submit-status-0-11 Jun 17 23:27:08.374: INFO: Pod pod-submit-status-0-11 on node node2 timings total=9.997452554s t=1.455s run=2s execute=0s Jun 17 23:27:18.378: INFO: watch delete seen for pod-submit-status-0-12 Jun 17 23:27:18.378: INFO: Pod pod-submit-status-0-12 on node node2 timings total=10.004518014s t=917ms run=0s execute=0s Jun 17 23:27:28.376: INFO: watch delete seen for pod-submit-status-0-13 Jun 17 23:27:28.376: INFO: Pod pod-submit-status-0-13 on node node2 timings total=9.997736039s t=238ms run=0s execute=0s Jun 17 23:27:31.491: INFO: watch delete seen for pod-submit-status-0-14 Jun 17 23:27:31.492: INFO: Pod pod-submit-status-0-14 on node node2 timings total=3.115494435s t=815ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:27:31.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1644" for this suite. 
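The invariant behind the watch-delete log above is that a pod whose phase is still Pending must never carry a container status claiming a successful (exit code 0) termination; the pods are built to always exit 1 and are deleted after a random delay to exercise the status transitions. A minimal sketch of that check, not the suite's actual assertion code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// verifyPendingStatus flags the condition the test guards against: a Pending
// pod whose container statuses already report a successful termination.
func verifyPendingStatus(pod *corev1.Pod) error {
	if pod.Status.Phase != corev1.PodPending {
		return nil
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil && t.ExitCode == 0 {
			return fmt.Errorf("pod %s is Pending but container %s reports success", pod.Name, cs.Name)
		}
	}
	return nil
}

func main() {
	pod := &corev1.Pod{}
	pod.Name = "pod-submit-status-0-0"
	pod.Status.Phase = corev1.PodPending
	fmt.Println(verifyPendingStatus(pod)) // prints <nil> while no container claims success
}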
• [SLOW TEST:173.294 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":3,"skipped":842,"failed":0} Jun 17 23:27:31.504: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:23:40.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0617 23:23:40.511740 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:23:40.511: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:23:40.513: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-150b5606-7420-44e9-b960-9ecac3bac981 in namespace container-probe-5988 Jun 17 23:23:52.532: INFO: Started pod startup-150b5606-7420-44e9-b960-9ecac3bac981 in namespace container-probe-5988 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 23:23:52.534: INFO: Initial restart count of pod startup-150b5606-7420-44e9-b960-9ecac3bac981 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:27:53.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5988" for this suite. 
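The startup-probe case above holds for roughly four minutes because liveness probing does not begin until the startup probe has succeeded, so a container that is slow to start is not killed by an aggressive liveness probe. A minimal sketch, assuming hypothetical probe commands and a placeholder slow-start command:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Liveness probe that would fail early in the container's life...
	liveness := corev1.Probe{PeriodSeconds: 10, FailureThreshold: 1}
	liveness.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/startup"}} // hypothetical marker file

	// ...held off by a startup probe that is allowed to fail for a long time;
	// liveness and readiness probes only begin once this probe succeeds.
	startup := corev1.Probe{PeriodSeconds: 10, FailureThreshold: 60}
	startup.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/startup"}}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-delays-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "busybox",
				Image:         "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command:       []string{"sh", "-c", "sleep 300; touch /tmp/startup; sleep 600"}, // hypothetical slow start
				LivenessProbe: &liveness,
				StartupProbe:  &startup,
			}},
		},
	}
	fmt.Println(pod.Name)
}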
• [SLOW TEST:252.639 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":1,"skipped":93,"failed":0} Jun 17 23:27:53.120: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:24:03.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-829dff7c-0ad1-4a16-b131-69ac6d41f085 in namespace container-probe-1988 Jun 17 23:24:07.972: INFO: Started pod liveness-829dff7c-0ad1-4a16-b131-69ac6d41f085 in namespace container-probe-1988 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 23:24:07.975: INFO: Initial restart count of pod liveness-829dff7c-0ad1-4a16-b131-69ac6d41f085 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:28:08.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1988" for this suite. 
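In the non-local redirect case above, the liveness endpoint answers with a redirect to a different host; the kubelet does not follow cross-host redirects and instead treats the response as a probe success (recording a ProbeWarning event), so the restart count stays at 0 for the whole observation window. A minimal sketch of such a pod, assuming an agnhost-style /redirect endpoint; the image tag, args, and URL are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// HTTP liveness probe hitting an endpoint that redirects to another host.
	liveness := corev1.Probe{InitialDelaySeconds: 15, FailureThreshold: 1}
	liveness.HTTPGet = &corev1.HTTPGetAction{
		Path: "/redirect?loc=http%3A%2F%2F0.0.0.0%2F", // hypothetical redirect target
		Port: intstr.FromInt(8080),
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-nonlocal-redirect"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "agnhost-container",
				Image:         "k8s.gcr.io/e2e-test-images/agnhost:2.32", // hypothetical tag
				Args:          []string{"liveness"},                       // illustrative; any server exposing the redirect endpoint works
				LivenessProbe: &liveness,
			}},
		},
	}
	fmt.Println(pod.Name)
}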
• [SLOW TEST:244.610 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:25:27.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Jun 17 23:25:27.956: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Jun 17 23:25:28.968: INFO: node status heartbeat is unchanged for 1.004006709s, waiting for 1m20s Jun 17 23:25:29.970: INFO: node status heartbeat is unchanged for 2.00678894s, waiting for 1m20s Jun 17 23:25:30.970: INFO: node status heartbeat is unchanged for 3.006501445s, waiting for 1m20s Jun 17 23:25:31.970: INFO: node status heartbeat is unchanged for 4.006632006s, waiting for 1m20s Jun 17 23:25:32.969: INFO: node status heartbeat is unchanged for 5.0053518s, waiting for 1m20s Jun 17 23:25:33.968: INFO: node status heartbeat is unchanged for 6.004584855s, waiting for 1m20s Jun 17 23:25:34.971: INFO: node status heartbeat is unchanged for 7.006915538s, waiting for 1m20s Jun 17 23:25:35.969: INFO: node status heartbeat is unchanged for 8.004877542s, waiting for 1m20s Jun 17 23:25:36.970: INFO: node status heartbeat is unchanged for 9.006470364s, waiting for 1m20s Jun 17 23:25:37.971: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:25:37.977: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:37 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    
Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:37 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:37 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 17 23:25:38.968: INFO: node status heartbeat is unchanged for 996.92042ms, waiting for 1m20s Jun 17 23:25:39.968: INFO: node status heartbeat is unchanged for 1.996594806s, waiting for 1m20s Jun 17 23:25:40.968: INFO: node status heartbeat is unchanged for 2.997111693s, waiting for 1m20s Jun 17 23:25:41.971: INFO: node status heartbeat is unchanged for 4.000203162s, waiting for 1m20s Jun 17 23:25:42.971: INFO: node status heartbeat is unchanged for 4.999640515s, waiting for 1m20s Jun 17 23:25:43.967: INFO: node status heartbeat is unchanged for 5.996078803s, waiting for 1m20s Jun 17 23:25:44.970: INFO: node status heartbeat is unchanged for 6.999462747s, waiting for 1m20s Jun 17 23:25:45.967: INFO: node status heartbeat is unchanged for 7.996129724s, waiting for 1m20s Jun 17 23:25:46.970: INFO: node status heartbeat is unchanged for 8.998990449s, waiting for 1m20s Jun 17 23:25:47.971: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:25:47.976: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:37 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:47 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:37 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:47 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure", 
   Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:37 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:47 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    NodeInfo: {MachineID: "3b9e31fbb30d4e48b9ac063755a76deb", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "5cd4c1a7-c6ca-496c-9122-4f944da708e6", KernelVersion: "3.10.0-1160.66.1.el7.x86_64", ...},    Images: []v1.ContainerImage{    ... // 30 identical elements    {Names: {"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf"..., "k8s.gcr.io/e2e-test-images/nonewprivs:1.3"}, SizeBytes: 7107254},    {Names: {"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172"..., "appropriate/curl:edge"}, SizeBytes: 5654234}, +  { +  Names: []string{ +  "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c6"..., +  "gcr.io/authenticated-image-pulling/alpine:3.7", +  }, +  SizeBytes: 4206620, +  },    {Names: {"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad"..., "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}, SizeBytes: 1154361},    {Names: {"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea"..., "busybox:1.28"}, SizeBytes: 1146369},    ... // 2 identical elements    },    VolumesInUse: nil,    VolumesAttached: nil,    Config: nil,   } Jun 17 23:25:48.968: INFO: node status heartbeat is unchanged for 996.937966ms, waiting for 1m20s Jun 17 23:25:49.969: INFO: node status heartbeat is unchanged for 1.997889053s, waiting for 1m20s Jun 17 23:25:50.969: INFO: node status heartbeat is unchanged for 2.997240361s, waiting for 1m20s Jun 17 23:25:51.967: INFO: node status heartbeat is unchanged for 3.995339893s, waiting for 1m20s Jun 17 23:25:52.969: INFO: node status heartbeat is unchanged for 4.997230282s, waiting for 1m20s Jun 17 23:25:53.968: INFO: node status heartbeat is unchanged for 5.996419683s, waiting for 1m20s Jun 17 23:25:54.968: INFO: node status heartbeat is unchanged for 6.996577324s, waiting for 1m20s Jun 17 23:25:55.967: INFO: node status heartbeat is unchanged for 7.995836293s, waiting for 1m20s Jun 17 23:25:56.967: INFO: node status heartbeat is unchanged for 8.99588261s, waiting for 1m20s Jun 17 23:25:57.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:25:57.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 
UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:47 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:57 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:47 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:57 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:47 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:57 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 17 23:25:58.966: INFO: node status heartbeat is unchanged for 998.624209ms, waiting for 1m20s Jun 17 23:25:59.968: INFO: node status heartbeat is unchanged for 2.000373171s, waiting for 1m20s Jun 17 23:26:00.968: INFO: node status heartbeat is unchanged for 2.999905089s, waiting for 1m20s Jun 17 23:26:01.968: INFO: node status heartbeat is unchanged for 3.999765519s, waiting for 1m20s Jun 17 23:26:02.970: INFO: node status heartbeat is unchanged for 5.002433356s, waiting for 1m20s Jun 17 23:26:03.968: INFO: node status heartbeat is unchanged for 6.000507121s, waiting for 1m20s Jun 17 23:26:04.970: INFO: node status heartbeat is unchanged for 7.002111255s, waiting for 1m20s Jun 17 23:26:05.968: INFO: node status heartbeat is unchanged for 8.000360161s, waiting for 1m20s Jun 17 23:26:06.968: INFO: node status heartbeat is unchanged for 8.999950018s, waiting for 1m20s Jun 17 23:26:07.969: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:26:07.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:57 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:07 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has 
sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:57 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:07 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:25:57 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:07 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 17 23:26:08.967: INFO: node status heartbeat is unchanged for 998.558035ms, waiting for 1m20s Jun 17 23:26:09.967: INFO: node status heartbeat is unchanged for 1.998567238s, waiting for 1m20s Jun 17 23:26:10.968: INFO: node status heartbeat is unchanged for 2.99899887s, waiting for 1m20s Jun 17 23:26:11.969: INFO: node status heartbeat is unchanged for 4.000063642s, waiting for 1m20s Jun 17 23:26:12.968: INFO: node status heartbeat is unchanged for 4.999623667s, waiting for 1m20s Jun 17 23:26:13.969: INFO: node status heartbeat is unchanged for 6.000229909s, waiting for 1m20s Jun 17 23:26:14.971: INFO: node status heartbeat is unchanged for 7.001907804s, waiting for 1m20s Jun 17 23:26:15.970: INFO: node status heartbeat is unchanged for 8.001077014s, waiting for 1m20s Jun 17 23:26:16.970: INFO: node status heartbeat is unchanged for 9.000952919s, waiting for 1m20s Jun 17 23:26:17.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:26:17.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:07 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:17 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:07 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:17 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    
Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:07 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:17 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 17 23:26:18.969: INFO: node status heartbeat is unchanged for 1.001027151s, waiting for 1m20s Jun 17 23:26:19.967: INFO: node status heartbeat is unchanged for 1.998719895s, waiting for 1m20s Jun 17 23:26:20.967: INFO: node status heartbeat is unchanged for 2.998621954s, waiting for 1m20s Jun 17 23:26:21.969: INFO: node status heartbeat is unchanged for 4.000393915s, waiting for 1m20s Jun 17 23:26:22.968: INFO: node status heartbeat is unchanged for 4.999435387s, waiting for 1m20s Jun 17 23:26:23.967: INFO: node status heartbeat is unchanged for 5.998860699s, waiting for 1m20s Jun 17 23:26:24.970: INFO: node status heartbeat is unchanged for 7.001676578s, waiting for 1m20s Jun 17 23:26:25.969: INFO: node status heartbeat is unchanged for 8.000470144s, waiting for 1m20s Jun 17 23:26:26.968: INFO: node status heartbeat is unchanged for 9.000090577s, waiting for 1m20s Jun 17 23:26:27.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:26:27.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:17 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:27 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:17 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:27 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:17 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:27 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID", 
   Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Jun 17 23:26:28.968: INFO: node status heartbeat is unchanged for 1.000273157s, waiting for 1m20s Jun 17 23:26:29.968: INFO: node status heartbeat is unchanged for 2.00041086s, waiting for 1m20s Jun 17 23:26:30.967: INFO: node status heartbeat is unchanged for 2.999388776s, waiting for 1m20s Jun 17 23:26:31.968: INFO: node status heartbeat is unchanged for 4.000124171s, waiting for 1m20s Jun 17 23:26:32.967: INFO: node status heartbeat is unchanged for 4.9995319s, waiting for 1m20s Jun 17 23:26:33.968: INFO: node status heartbeat is unchanged for 6.000506141s, waiting for 1m20s Jun 17 23:26:34.970: INFO: node status heartbeat is unchanged for 7.001882333s, waiting for 1m20s Jun 17 23:26:35.971: INFO: node status heartbeat is unchanged for 8.00266683s, waiting for 1m20s Jun 17 23:26:36.970: INFO: node status heartbeat is unchanged for 9.001899823s, waiting for 1m20s Jun 17 23:26:37.969: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:26:37.974: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:37 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:37 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:37 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 
10250}},    ... // 5 identical fields   } Jun 17 23:26:38.968: INFO: node status heartbeat is unchanged for 998.503181ms, waiting for 1m20s Jun 17 23:26:39.971: INFO: node status heartbeat is unchanged for 2.001197549s, waiting for 1m20s Jun 17 23:26:40.970: INFO: node status heartbeat is unchanged for 3.000156784s, waiting for 1m20s Jun 17 23:26:41.968: INFO: node status heartbeat is unchanged for 3.998793146s, waiting for 1m20s Jun 17 23:26:42.970: INFO: node status heartbeat is unchanged for 5.000412653s, waiting for 1m20s Jun 17 23:26:43.969: INFO: node status heartbeat is unchanged for 5.999758608s, waiting for 1m20s Jun 17 23:26:44.969: INFO: node status heartbeat is unchanged for 6.999864701s, waiting for 1m20s Jun 17 23:26:45.968: INFO: node status heartbeat is unchanged for 7.999099337s, waiting for 1m20s Jun 17 23:26:46.970: INFO: node status heartbeat is unchanged for 9.000701951s, waiting for 1m20s Jun 17 23:26:47.971: INFO: node status heartbeat is unchanged for 10.001874628s, waiting for 1m20s Jun 17 23:26:48.970: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Jun 17 23:26:48.974: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:37 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:37 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:37 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:26:49.970: INFO: node status heartbeat is unchanged for 1.000621435s, waiting for 1m20s Jun 17 23:26:50.971: INFO: node status heartbeat is unchanged for 2.001036354s, waiting for 1m20s Jun 17 23:26:51.968: INFO: node status heartbeat is unchanged for 2.998452449s, waiting for 1m20s Jun 17 23:26:52.971: INFO: node status heartbeat is unchanged for 4.001552474s, waiting for 1m20s Jun 17 23:26:53.968: INFO: node status heartbeat is unchanged for 4.998725505s, waiting for 1m20s Jun 17 23:26:54.970: INFO: node status heartbeat is unchanged for 6.000760627s, waiting for 1m20s Jun 17 23:26:55.969: INFO: node status heartbeat is unchanged for 6.999552311s, waiting for 1m20s Jun 17 23:26:56.970: INFO: node status heartbeat is unchanged for 8.000206602s, waiting for 1m20s Jun 17 23:26:57.967: INFO: node status heartbeat is unchanged for 8.997780996s, waiting for 1m20s Jun 17 23:26:58.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:26:58.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:58 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:58 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:58 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:26:59.970: INFO: node status heartbeat is unchanged for 1.00130413s, waiting for 1m20s Jun 17 23:27:00.969: INFO: node status heartbeat is unchanged for 2.000828863s, waiting for 1m20s Jun 17 23:27:01.968: INFO: node status heartbeat is unchanged for 2.999261236s, waiting for 1m20s Jun 17 23:27:02.970: INFO: node status heartbeat is unchanged for 4.00122256s, waiting for 1m20s Jun 17 23:27:03.968: INFO: node status heartbeat is unchanged for 4.999936679s, waiting for 1m20s Jun 17 23:27:04.970: INFO: node status heartbeat is unchanged for 6.001258068s, waiting for 1m20s Jun 17 23:27:05.967: INFO: node status heartbeat is unchanged for 6.998990222s, waiting for 1m20s Jun 17 23:27:06.970: INFO: node status heartbeat is unchanged for 8.002006058s, waiting for 1m20s Jun 17 23:27:07.968: INFO: node status heartbeat is unchanged for 8.999667379s, waiting for 1m20s Jun 17 23:27:08.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:27:08.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:08 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:08 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:26:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:08 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:27:09.967: INFO: node status heartbeat is unchanged for 998.737514ms, waiting for 1m20s Jun 17 23:27:10.968: INFO: node status heartbeat is unchanged for 1.999377614s, waiting for 1m20s Jun 17 23:27:11.968: INFO: node status heartbeat is unchanged for 2.999606334s, waiting for 1m20s Jun 17 23:27:12.968: INFO: node status heartbeat is unchanged for 3.999503797s, waiting for 1m20s Jun 17 23:27:13.968: INFO: node status heartbeat is unchanged for 4.999640165s, waiting for 1m20s Jun 17 23:27:14.971: INFO: node status heartbeat is unchanged for 6.003009114s, waiting for 1m20s Jun 17 23:27:15.968: INFO: node status heartbeat is unchanged for 6.999726569s, waiting for 1m20s Jun 17 23:27:16.968: INFO: node status heartbeat is unchanged for 8.00030128s, waiting for 1m20s Jun 17 23:27:17.968: INFO: node status heartbeat is unchanged for 8.999822046s, waiting for 1m20s Jun 17 23:27:18.970: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:27:18.974: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:18 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:18 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:18 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:27:19.970: INFO: node status heartbeat is unchanged for 1.000287799s, waiting for 1m20s Jun 17 23:27:20.967: INFO: node status heartbeat is unchanged for 1.997551254s, waiting for 1m20s Jun 17 23:27:21.970: INFO: node status heartbeat is unchanged for 3.000335498s, waiting for 1m20s Jun 17 23:27:22.969: INFO: node status heartbeat is unchanged for 3.99900343s, waiting for 1m20s Jun 17 23:27:23.968: INFO: node status heartbeat is unchanged for 4.998212704s, waiting for 1m20s Jun 17 23:27:24.970: INFO: node status heartbeat is unchanged for 6.000057998s, waiting for 1m20s Jun 17 23:27:25.969: INFO: node status heartbeat is unchanged for 6.999858572s, waiting for 1m20s Jun 17 23:27:26.968: INFO: node status heartbeat is unchanged for 7.998199426s, waiting for 1m20s Jun 17 23:27:27.969: INFO: node status heartbeat is unchanged for 8.999147385s, waiting for 1m20s Jun 17 23:27:28.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:27:28.972: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:28 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:28 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:28 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:27:29.969: INFO: node status heartbeat is unchanged for 1.001063428s, waiting for 1m20s Jun 17 23:27:30.968: INFO: node status heartbeat is unchanged for 2.000868056s, waiting for 1m20s Jun 17 23:27:31.968: INFO: node status heartbeat is unchanged for 3.000332517s, waiting for 1m20s Jun 17 23:27:32.967: INFO: node status heartbeat is unchanged for 3.999757622s, waiting for 1m20s Jun 17 23:27:33.968: INFO: node status heartbeat is unchanged for 5.000317086s, waiting for 1m20s Jun 17 23:27:34.970: INFO: node status heartbeat is unchanged for 6.002259551s, waiting for 1m20s Jun 17 23:27:35.969: INFO: node status heartbeat is unchanged for 7.001340068s, waiting for 1m20s Jun 17 23:27:36.968: INFO: node status heartbeat is unchanged for 8.000801048s, waiting for 1m20s Jun 17 23:27:37.968: INFO: node status heartbeat is unchanged for 9.000439944s, waiting for 1m20s Jun 17 23:27:38.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:27:38.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:38 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:38 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:38 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:27:39.970: INFO: node status heartbeat is unchanged for 1.001509438s, waiting for 1m20s Jun 17 23:27:40.969: INFO: node status heartbeat is unchanged for 2.000585536s, waiting for 1m20s Jun 17 23:27:41.970: INFO: node status heartbeat is unchanged for 3.001503056s, waiting for 1m20s Jun 17 23:27:42.972: INFO: node status heartbeat is unchanged for 4.003207369s, waiting for 1m20s Jun 17 23:27:43.969: INFO: node status heartbeat is unchanged for 5.000366079s, waiting for 1m20s Jun 17 23:27:44.968: INFO: node status heartbeat is unchanged for 6.000125764s, waiting for 1m20s Jun 17 23:27:45.968: INFO: node status heartbeat is unchanged for 6.999598104s, waiting for 1m20s Jun 17 23:27:46.971: INFO: node status heartbeat is unchanged for 8.002456935s, waiting for 1m20s Jun 17 23:27:47.970: INFO: node status heartbeat is unchanged for 9.001627657s, waiting for 1m20s Jun 17 23:27:48.969: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:27:48.974: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:27:49.968: INFO: node status heartbeat is unchanged for 999.454496ms, waiting for 1m20s Jun 17 23:27:50.969: INFO: node status heartbeat is unchanged for 1.999656495s, waiting for 1m20s Jun 17 23:27:51.969: INFO: node status heartbeat is unchanged for 2.999849672s, waiting for 1m20s Jun 17 23:27:52.968: INFO: node status heartbeat is unchanged for 3.998725927s, waiting for 1m20s Jun 17 23:27:53.969: INFO: node status heartbeat is unchanged for 5.000466323s, waiting for 1m20s Jun 17 23:27:54.970: INFO: node status heartbeat is unchanged for 6.000923553s, waiting for 1m20s Jun 17 23:27:55.969: INFO: node status heartbeat is unchanged for 6.999574289s, waiting for 1m20s Jun 17 23:27:56.969: INFO: node status heartbeat is unchanged for 7.999804911s, waiting for 1m20s Jun 17 23:27:57.969: INFO: node status heartbeat is unchanged for 9.000255824s, waiting for 1m20s Jun 17 23:27:58.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:27:58.972: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:58 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:58 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:58 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:27:59.971: INFO: node status heartbeat is unchanged for 1.002668925s, waiting for 1m20s Jun 17 23:28:00.969: INFO: node status heartbeat is unchanged for 2.000979707s, waiting for 1m20s Jun 17 23:28:01.970: INFO: node status heartbeat is unchanged for 3.002102922s, waiting for 1m20s Jun 17 23:28:02.970: INFO: node status heartbeat is unchanged for 4.002023095s, waiting for 1m20s Jun 17 23:28:03.969: INFO: node status heartbeat is unchanged for 5.001292171s, waiting for 1m20s Jun 17 23:28:04.969: INFO: node status heartbeat is unchanged for 6.000960063s, waiting for 1m20s Jun 17 23:28:05.970: INFO: node status heartbeat is unchanged for 7.002002566s, waiting for 1m20s Jun 17 23:28:06.968: INFO: node status heartbeat is unchanged for 8.00059341s, waiting for 1m20s Jun 17 23:28:07.970: INFO: node status heartbeat is unchanged for 9.002272739s, waiting for 1m20s Jun 17 23:28:08.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:28:08.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:08 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:08 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:27:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:08 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:28:09.969: INFO: node status heartbeat is unchanged for 1.00098469s, waiting for 1m20s Jun 17 23:28:10.969: INFO: node status heartbeat is unchanged for 2.000958533s, waiting for 1m20s Jun 17 23:28:11.969: INFO: node status heartbeat is unchanged for 3.000319186s, waiting for 1m20s Jun 17 23:28:12.968: INFO: node status heartbeat is unchanged for 4.000144196s, waiting for 1m20s Jun 17 23:28:13.969: INFO: node status heartbeat is unchanged for 5.001146292s, waiting for 1m20s Jun 17 23:28:14.971: INFO: node status heartbeat is unchanged for 6.002399474s, waiting for 1m20s Jun 17 23:28:15.969: INFO: node status heartbeat is unchanged for 7.001252918s, waiting for 1m20s Jun 17 23:28:16.971: INFO: node status heartbeat is unchanged for 8.002618116s, waiting for 1m20s Jun 17 23:28:17.969: INFO: node status heartbeat is unchanged for 9.000523526s, waiting for 1m20s Jun 17 23:28:18.969: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:28:18.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:18 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:18 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:18 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:28:19.969: INFO: node status heartbeat is unchanged for 1.000102995s, waiting for 1m20s Jun 17 23:28:20.968: INFO: node status heartbeat is unchanged for 1.998996462s, waiting for 1m20s Jun 17 23:28:21.969: INFO: node status heartbeat is unchanged for 3.000546248s, waiting for 1m20s Jun 17 23:28:22.969: INFO: node status heartbeat is unchanged for 4.000125924s, waiting for 1m20s Jun 17 23:28:23.968: INFO: node status heartbeat is unchanged for 4.99971059s, waiting for 1m20s Jun 17 23:28:24.969: INFO: node status heartbeat is unchanged for 5.999871672s, waiting for 1m20s Jun 17 23:28:25.970: INFO: node status heartbeat is unchanged for 7.001310997s, waiting for 1m20s Jun 17 23:28:26.969: INFO: node status heartbeat is unchanged for 8.000392424s, waiting for 1m20s Jun 17 23:28:27.969: INFO: node status heartbeat is unchanged for 9.000414518s, waiting for 1m20s Jun 17 23:28:28.969: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:28:28.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:28 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:28 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:28 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:28:29.968: INFO: node status heartbeat is unchanged for 999.322406ms, waiting for 1m20s Jun 17 23:28:30.967: INFO: node status heartbeat is unchanged for 1.999005149s, waiting for 1m20s Jun 17 23:28:31.968: INFO: node status heartbeat is unchanged for 2.999123544s, waiting for 1m20s Jun 17 23:28:32.969: INFO: node status heartbeat is unchanged for 4.000022812s, waiting for 1m20s Jun 17 23:28:33.968: INFO: node status heartbeat is unchanged for 4.999871101s, waiting for 1m20s Jun 17 23:28:34.970: INFO: node status heartbeat is unchanged for 6.001343181s, waiting for 1m20s Jun 17 23:28:35.970: INFO: node status heartbeat is unchanged for 7.001960008s, waiting for 1m20s Jun 17 23:28:36.970: INFO: node status heartbeat is unchanged for 8.001153639s, waiting for 1m20s Jun 17 23:28:37.970: INFO: node status heartbeat is unchanged for 9.001108896s, waiting for 1m20s Jun 17 23:28:38.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:28:38.972: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:38 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:38 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:38 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:28:39.970: INFO: node status heartbeat is unchanged for 1.002007756s, waiting for 1m20s Jun 17 23:28:40.968: INFO: node status heartbeat is unchanged for 2.000969101s, waiting for 1m20s Jun 17 23:28:41.968: INFO: node status heartbeat is unchanged for 3.000245172s, waiting for 1m20s Jun 17 23:28:42.969: INFO: node status heartbeat is unchanged for 4.001143369s, waiting for 1m20s Jun 17 23:28:43.968: INFO: node status heartbeat is unchanged for 5.000557638s, waiting for 1m20s Jun 17 23:28:44.969: INFO: node status heartbeat is unchanged for 6.001225577s, waiting for 1m20s Jun 17 23:28:45.968: INFO: node status heartbeat is unchanged for 7.000619192s, waiting for 1m20s Jun 17 23:28:46.968: INFO: node status heartbeat is unchanged for 8.000623639s, waiting for 1m20s Jun 17 23:28:47.968: INFO: node status heartbeat is unchanged for 9.000586906s, waiting for 1m20s Jun 17 23:28:48.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:28:48.972: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:28:49.969: INFO: node status heartbeat is unchanged for 1.001122563s, waiting for 1m20s Jun 17 23:28:50.968: INFO: node status heartbeat is unchanged for 2.000358144s, waiting for 1m20s Jun 17 23:28:51.971: INFO: node status heartbeat is unchanged for 3.003575542s, waiting for 1m20s Jun 17 23:28:52.969: INFO: node status heartbeat is unchanged for 4.001066541s, waiting for 1m20s Jun 17 23:28:53.968: INFO: node status heartbeat is unchanged for 5.00071333s, waiting for 1m20s Jun 17 23:28:54.967: INFO: node status heartbeat is unchanged for 5.999412671s, waiting for 1m20s Jun 17 23:28:55.991: INFO: node status heartbeat is unchanged for 7.023968991s, waiting for 1m20s Jun 17 23:28:56.970: INFO: node status heartbeat is unchanged for 8.00206348s, waiting for 1m20s Jun 17 23:28:57.970: INFO: node status heartbeat is unchanged for 9.002086063s, waiting for 1m20s Jun 17 23:28:58.969: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:28:58.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:58 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:58 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:58 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:28:59.968: INFO: node status heartbeat is unchanged for 999.024865ms, waiting for 1m20s Jun 17 23:29:00.968: INFO: node status heartbeat is unchanged for 1.999498842s, waiting for 1m20s Jun 17 23:29:01.969: INFO: node status heartbeat is unchanged for 3.000586246s, waiting for 1m20s Jun 17 23:29:02.970: INFO: node status heartbeat is unchanged for 4.000941023s, waiting for 1m20s Jun 17 23:29:03.968: INFO: node status heartbeat is unchanged for 4.999026056s, waiting for 1m20s Jun 17 23:29:04.967: INFO: node status heartbeat is unchanged for 5.998695143s, waiting for 1m20s Jun 17 23:29:05.968: INFO: node status heartbeat is unchanged for 6.999277342s, waiting for 1m20s Jun 17 23:29:06.968: INFO: node status heartbeat is unchanged for 7.999044279s, waiting for 1m20s Jun 17 23:29:07.969: INFO: node status heartbeat is unchanged for 8.999889368s, waiting for 1m20s Jun 17 23:29:08.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:29:08.972: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:08 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:08 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:28:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:08 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:29:09.969: INFO: node status heartbeat is unchanged for 1.001463766s, waiting for 1m20s Jun 17 23:29:10.969: INFO: node status heartbeat is unchanged for 2.001744569s, waiting for 1m20s Jun 17 23:29:11.967: INFO: node status heartbeat is unchanged for 2.999540273s, waiting for 1m20s Jun 17 23:29:12.968: INFO: node status heartbeat is unchanged for 4.000141898s, waiting for 1m20s Jun 17 23:29:13.968: INFO: node status heartbeat is unchanged for 4.999995567s, waiting for 1m20s Jun 17 23:29:14.968: INFO: node status heartbeat is unchanged for 6.000527649s, waiting for 1m20s Jun 17 23:29:15.970: INFO: node status heartbeat is unchanged for 7.002624761s, waiting for 1m20s Jun 17 23:29:16.968: INFO: node status heartbeat is unchanged for 8.000175689s, waiting for 1m20s Jun 17 23:29:17.969: INFO: node status heartbeat is unchanged for 9.001085989s, waiting for 1m20s Jun 17 23:29:18.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:29:18.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:18 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:18 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:18 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:29:19.968: INFO: node status heartbeat is unchanged for 1.000718289s, waiting for 1m20s Jun 17 23:29:20.969: INFO: node status heartbeat is unchanged for 2.00095331s, waiting for 1m20s Jun 17 23:29:21.968: INFO: node status heartbeat is unchanged for 3.000030663s, waiting for 1m20s Jun 17 23:29:22.969: INFO: node status heartbeat is unchanged for 4.001058192s, waiting for 1m20s Jun 17 23:29:23.969: INFO: node status heartbeat is unchanged for 5.001256818s, waiting for 1m20s Jun 17 23:29:24.969: INFO: node status heartbeat is unchanged for 6.001269846s, waiting for 1m20s Jun 17 23:29:25.967: INFO: node status heartbeat is unchanged for 6.999753498s, waiting for 1m20s Jun 17 23:29:26.970: INFO: node status heartbeat is unchanged for 8.002445266s, waiting for 1m20s Jun 17 23:29:27.967: INFO: node status heartbeat is unchanged for 8.999437905s, waiting for 1m20s Jun 17 23:29:28.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:29:28.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:28 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:28 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:28 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:29:29.968: INFO: node status heartbeat is unchanged for 999.996315ms, waiting for 1m20s Jun 17 23:29:30.970: INFO: node status heartbeat is unchanged for 2.001781635s, waiting for 1m20s Jun 17 23:29:31.972: INFO: node status heartbeat is unchanged for 3.003186926s, waiting for 1m20s Jun 17 23:29:32.969: INFO: node status heartbeat is unchanged for 4.000319914s, waiting for 1m20s Jun 17 23:29:33.968: INFO: node status heartbeat is unchanged for 4.999632715s, waiting for 1m20s Jun 17 23:29:34.968: INFO: node status heartbeat is unchanged for 5.99949803s, waiting for 1m20s Jun 17 23:29:35.967: INFO: node status heartbeat is unchanged for 6.999019206s, waiting for 1m20s Jun 17 23:29:36.967: INFO: node status heartbeat is unchanged for 7.999015327s, waiting for 1m20s Jun 17 23:29:37.969: INFO: node status heartbeat is unchanged for 9.000452762s, waiting for 1m20s Jun 17 23:29:38.969: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:29:38.973: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:38 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:38 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:38 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:29:39.970: INFO: node status heartbeat is unchanged for 1.001055035s, waiting for 1m20s Jun 17 23:29:40.967: INFO: node status heartbeat is unchanged for 1.998631958s, waiting for 1m20s Jun 17 23:29:41.968: INFO: node status heartbeat is unchanged for 2.999637934s, waiting for 1m20s Jun 17 23:29:42.968: INFO: node status heartbeat is unchanged for 3.999077737s, waiting for 1m20s Jun 17 23:29:43.969: INFO: node status heartbeat is unchanged for 5.000407499s, waiting for 1m20s Jun 17 23:29:44.969: INFO: node status heartbeat is unchanged for 6.000272103s, waiting for 1m20s Jun 17 23:29:45.968: INFO: node status heartbeat is unchanged for 6.999791508s, waiting for 1m20s Jun 17 23:29:46.967: INFO: node status heartbeat is unchanged for 7.998513027s, waiting for 1m20s Jun 17 23:29:47.968: INFO: node status heartbeat is unchanged for 8.999815341s, waiting for 1m20s Jun 17 23:29:48.969: INFO: node status heartbeat is unchanged for 10.000124819s, waiting for 1m20s Jun 17 23:29:49.967: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:29:49.972: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:48 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:29:50.968: INFO: node status heartbeat is unchanged for 1.000972048s, waiting for 1m20s Jun 17 23:29:51.968: INFO: node status heartbeat is unchanged for 2.000467712s, waiting for 1m20s Jun 17 23:29:52.967: INFO: node status heartbeat is unchanged for 2.999929957s, waiting for 1m20s Jun 17 23:29:53.968: INFO: node status heartbeat is unchanged for 4.00079709s, waiting for 1m20s Jun 17 23:29:54.967: INFO: node status heartbeat is unchanged for 4.999415687s, waiting for 1m20s Jun 17 23:29:55.967: INFO: node status heartbeat is unchanged for 5.99934986s, waiting for 1m20s Jun 17 23:29:56.969: INFO: node status heartbeat is unchanged for 7.001531526s, waiting for 1m20s Jun 17 23:29:57.969: INFO: node status heartbeat is unchanged for 8.001437906s, waiting for 1m20s Jun 17 23:29:58.968: INFO: node status heartbeat is unchanged for 9.000475719s, waiting for 1m20s Jun 17 23:29:59.971: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Jun 17 23:29:59.975: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:59 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:59 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:59 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:30:00.968: INFO: node status heartbeat is unchanged for 997.010627ms, waiting for 1m20s Jun 17 23:30:01.969: INFO: node status heartbeat is unchanged for 1.998450525s, waiting for 1m20s Jun 17 23:30:02.970: INFO: node status heartbeat is unchanged for 2.999932188s, waiting for 1m20s Jun 17 23:30:03.968: INFO: node status heartbeat is unchanged for 3.997839686s, waiting for 1m20s Jun 17 23:30:04.969: INFO: node status heartbeat is unchanged for 4.998122966s, waiting for 1m20s Jun 17 23:30:05.967: INFO: node status heartbeat is unchanged for 5.996092462s, waiting for 1m20s Jun 17 23:30:06.969: INFO: node status heartbeat is unchanged for 6.998236039s, waiting for 1m20s Jun 17 23:30:07.968: INFO: node status heartbeat is unchanged for 7.997210835s, waiting for 1m20s Jun 17 23:30:08.967: INFO: node status heartbeat is unchanged for 8.996595017s, waiting for 1m20s Jun 17 23:30:09.968: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:30:09.972: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:30:09 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:30:09 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:29:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:30:09 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Jun 17 23:30:10.967: INFO: node status heartbeat is unchanged for 999.147371ms, waiting for 1m20s Jun 17 23:30:11.969: INFO: node status heartbeat is unchanged for 2.000747463s, waiting for 1m20s Jun 17 23:30:12.968: INFO: node status heartbeat is unchanged for 3.000473463s, waiting for 1m20s Jun 17 23:30:13.967: INFO: node status heartbeat is unchanged for 3.999449686s, waiting for 1m20s Jun 17 23:30:14.967: INFO: node status heartbeat is unchanged for 4.999424589s, waiting for 1m20s Jun 17 23:30:15.970: INFO: node status heartbeat is unchanged for 6.00240334s, waiting for 1m20s Jun 17 23:30:16.968: INFO: node status heartbeat is unchanged for 7.000366899s, waiting for 1m20s Jun 17 23:30:17.968: INFO: node status heartbeat is unchanged for 8.000456013s, waiting for 1m20s Jun 17 23:30:18.968: INFO: node status heartbeat is unchanged for 8.999886372s, waiting for 1m20s Jun 17 23:30:19.969: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 17 23:30:19.974: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-17 20:04:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:30:09 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:30:19 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:30:09 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:30:19 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:30:09 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-06-17 23:30:19 +0000 UTC"},    LastTransitionTime: {Time: s"2022-06-17 20:00:37 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-17 20:04:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
   // 5 identical fields
  }
Jun 17 23:30:20.968: INFO: node status heartbeat is unchanged for 998.860163ms, waiting for 1m20s
Jun 17 23:30:21.970: INFO: node status heartbeat is unchanged for 2.000565762s, waiting for 1m20s
Jun 17 23:30:22.968: INFO: node status heartbeat is unchanged for 2.999172462s, waiting for 1m20s
Jun 17 23:30:23.967: INFO: node status heartbeat is unchanged for 3.998094533s, waiting for 1m20s
Jun 17 23:30:24.969: INFO: node status heartbeat is unchanged for 5.000160561s, waiting for 1m20s
Jun 17 23:30:25.969: INFO: node status heartbeat is unchanged for 6.000230608s, waiting for 1m20s
Jun 17 23:30:26.971: INFO: node status heartbeat is unchanged for 7.001586686s, waiting for 1m20s
Jun 17 23:30:27.969: INFO: node status heartbeat is unchanged for 7.999575044s, waiting for 1m20s
Jun 17 23:30:27.971: INFO: node status heartbeat is unchanged for 8.00210734s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:30:27.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-340" for this suite.

• [SLOW TEST:300.064 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":7,"skipped":1060,"failed":0}
Jun 17 23:30:27.991: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:25:22.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
Jun 17 23:25:22.915: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:25:24.919: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:25:26.924: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:25:28.919: INFO: The status of Pod pod-back-off-image is Running (Ready = true)
STEP: getting restart delay-0
Jun 17 23:27:23.154: INFO: getRestartDelay: restartCount = 4, finishedAt=2022-06-17 23:26:29 +0000 UTC restartedAt=2022-06-17 23:27:21 +0000 UTC (52s)
STEP: getting restart delay-1
Jun 17 23:29:02.591: INFO: getRestartDelay: restartCount = 5, finishedAt=2022-06-17 23:27:26 +0000 UTC restartedAt=2022-06-17 23:29:00 +0000 UTC (1m34s)
STEP: getting restart delay-2
Jun 17 23:31:58.413: INFO: getRestartDelay: restartCount = 6, finishedAt=2022-06-17 23:29:05 +0000 UTC restartedAt=2022-06-17 23:31:56 +0000 UTC (2m51s)
STEP: updating the image
Jun 17 23:31:58.924: INFO: Successfully updated pod "pod-back-off-image"
STEP: get restart delay after image update
Jun 17 23:32:21.001: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-06-17 23:32:07 +0000 UTC restartedAt=2022-06-17 23:32:19 +0000 UTC (12s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:32:21.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5634" for this suite.

• [SLOW TEST:418.139 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
------------------------------
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":4,"skipped":230,"failed":0}
Jun 17 23:32:21.018: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:23:40.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W0617 23:23:40.625455 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:23:40.625: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:23:40.627: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Jun 17 23:23:40.642: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:23:42.645: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:23:44.646: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Jun 17 23:35:29.106: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-06-17 23:30:13 +0000 UTC restartedAt=2022-06-17 23:35:27 +0000 UTC (5m14s)
Jun 17 23:40:38.573: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-06-17 23:35:32 +0000 UTC restartedAt=2022-06-17 23:40:36 +0000 UTC (5m4s)
Jun 17 23:45:44.021: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-06-17 23:40:41 +0000 UTC restartedAt=2022-06-17 23:45:42 +0000 UTC (5m1s)
STEP: getting restart delay after a capped delay
Jun 17 23:51:04.605: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-06-17 23:45:47 +0000 UTC restartedAt=2022-06-17 23:51:02 +0000 UTC (5m15s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:51:04.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1477" for this suite.

• [SLOW TEST:1644.010 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":1,"skipped":159,"failed":0}
Jun 17 23:51:04.616: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":4,"skipped":355,"failed":0}
Jun 17 23:28:08.544: INFO: Running AfterSuite actions on all nodes
Jun 17 23:51:04.651: INFO: Running AfterSuite actions on node 1
Jun 17 23:51:04.652: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5773 Specs in 1644.632 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5720 Skipped

Ginkgo ran 1 suite in 27m26.236959834s
Test Suite Failed
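
Note on the [sig-node] NodeLease spec above: it passes because the node's Ready condition stays True while full NodeStatus updates (the LastHeartbeatTime diffs logged every ~10s) arrive infrequently, the faster heartbeat being carried by the node's Lease object. A minimal client-go sketch of the same condition check follows; the kubeconfig path and node name "node2" are taken from this log, while the program itself is illustrative and not part of the e2e framework.

```go
// check_node_heartbeat.go -- a minimal sketch, not part of the e2e suite.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name as they appear in the log above;
	// adjust both for your own cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Print each condition and how long ago the kubelet last touched it --
	// the same LastHeartbeatTime fields shown in the diffs above.
	for _, cond := range node.Status.Conditions {
		age := time.Since(cond.LastHeartbeatTime.Time).Round(time.Second)
		fmt.Printf("%-22s %-6s last heartbeat %s ago (%s)\n",
			cond.Type, cond.Status, age, cond.Reason)
	}
}
```

Run against a live cluster, a healthy node keeps Ready=True even as the condition heartbeat ages approach the ~10s status-update interval seen in the log.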
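Note on the two [sig-node] Pods back-off specs above: the measured restart gaps roughly double (52s, 1m34s, 2m51s), drop back to 12s once the image is updated, and settle near five minutes once the cap is reached (5m14s, 5m4s, 5m1s, 5m15s). The sketch below reproduces that nominal schedule assuming the kubelet defaults of a ~10s initial delay doubling up to the 5-minute MaxContainerBackOff cap; the constants and helper are illustrative rather than the kubelet's actual implementation, and the logged gaps sit slightly above the nominal values because they also include container run time and kubelet sync latency.

```go
// backoff_schedule.go -- a sketch of the crash-loop back-off schedule,
// assuming kubelet-default parameters (10s base, 5m cap).
package main

import (
	"fmt"
	"time"
)

const (
	initialBackOff      = 10 * time.Second // assumed kubelet base delay
	maxContainerBackOff = 5 * time.Minute  // assumed MaxContainerBackOff cap
)

// backOffDelay returns the nominal wait before restart number n (1-based),
// doubling from initialBackOff and saturating at maxContainerBackOff.
func backOffDelay(n int) time.Duration {
	d := initialBackOff
	for i := 1; i < n; i++ {
		d *= 2
		if d >= maxContainerBackOff {
			return maxContainerBackOff
		}
	}
	return d
}

func main() {
	for n := 1; n <= 8; n++ {
		fmt.Printf("restart %d: wait ~%s\n", n, backOffDelay(n))
	}
	// An image update (or a sufficiently long successful run) clears the
	// back-off entry, so the next delay starts again near initialBackOff --
	// which is why the logged gap drops from ~2m51s to ~12s after the update.
}
```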