Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1637548468 - Will randomize all specs
Will run 5770 specs
Running in parallel across 10 nodes
Nov 22 02:34:29.940: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:29.944: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 22 02:34:29.972: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 22 02:34:30.049: INFO: The status of Pod cmk-init-discover-node1-brwt6 is Succeeded, skipping waiting
Nov 22 02:34:30.049: INFO: The status of Pod cmk-init-discover-node2-8jdqf is Succeeded, skipping waiting
Nov 22 02:34:30.049: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 22 02:34:30.049: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 22 02:34:30.049: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 22 02:34:30.067: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 22 02:34:30.067: INFO: e2e test version: v1.21.5
Nov 22 02:34:30.068: INFO: kube-apiserver version: v1.21.1
Nov 22 02:34:30.069: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.076: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Nov 22 02:34:30.072: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.093: INFO: Cluster IP family: ipv4
SS
------------------------------
Nov 22 02:34:30.074: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.096: INFO: Cluster IP family: ipv4
Nov 22 02:34:30.073: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.096: INFO: Cluster IP family: ipv4
SSS
------------------------------
Nov 22 02:34:30.079: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.101: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Nov 22 02:34:30.084: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.106: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Nov 22 02:34:30.099: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.121: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 22 02:34:30.103: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.124: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Nov 22 02:34:30.117: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.138: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Nov 22 02:34:30.118: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:30.140: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:30.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
W1122 02:34:30.386317 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 22 02:34:30.386: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 22 02:34:30.388: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] should have OwnerReferences set
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88
[AfterEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:30.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-6727" for this suite.
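The NodeLease spec above asserts that each node's Lease object in the kube-node-lease namespace carries an OwnerReference back to its Node, so the Lease is garbage-collected with the node. A hypothetical Lease illustrating the shape being asserted (the node name, UID, and timestamps below are illustrative, not values from this run):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node1                  # node leases are named after their node
  namespace: kube-node-lease
  ownerReferences:             # the field this spec checks is set
  - apiVersion: v1
    kind: Node
    name: node1
    uid: 11111111-2222-3333-4444-555555555555   # illustrative UID
spec:
  holderIdentity: node1
  leaseDurationSeconds: 40
  renewTime: "2021-11-22T02:34:30.000000Z"
```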
•SSSS
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":61,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:30.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
W1122 02:34:30.575504 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 22 02:34:30.575: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 22 02:34:30.577: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
Nov 22 02:34:30.579: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:30.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-8644" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:30.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-privileged-pod
W1122 02:34:30.237682 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 22 02:34:30.237: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 22 02:34:30.239: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should enable privileged commands [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
STEP: Creating a pod with a privileged container
Nov 22 02:34:30.260: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 22 02:34:32.264: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 22 02:34:34.263: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 22 02:34:36.264: INFO: The status of Pod privileged-pod is Running (Ready = true)
STEP: Executing in the privileged container
Nov 22 02:34:36.267: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-7340 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 22 02:34:36.267: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 02:34:36.362: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-7340 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 22 02:34:36.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Executing in the non-privileged container
Nov 22 02:34:36.485: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-7340 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 22 02:34:36.485: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:36.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-7340" for this suite.
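The PrivilegedPod spec above pairs a privileged and a non-privileged container in one pod and execs `ip link add dummy1 type dummy` into each; only the privileged container (which has CAP_NET_ADMIN via `privileged: true`) can create the interface. A minimal sketch of such a pod, not the exact manifest the test framework builds (image and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
  - name: privileged-container
    image: busybox                   # illustrative image
    command: ["sleep", "3600"]
    securityContext:
      privileged: true               # `ip link add` succeeds here
  - name: not-privileged-container
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: false              # `ip link add dummy1` is expected to fail here
```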
• [SLOW TEST:6.354 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:30.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1122 02:34:30.552158 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 22 02:34:30.552: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 22 02:34:30.553: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Nov 22 02:34:30.567: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2378" to be "Succeeded or Failed"
Nov 22 02:34:30.569: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108836ms
Nov 22 02:34:32.573: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00658402s
Nov 22 02:34:34.579: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011978226s
Nov 22 02:34:36.587: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020060857s
Nov 22 02:34:36.587: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:36.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2378" for this suite.
• [SLOW TEST:6.077 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:36.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Nov 22 02:34:36.787: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:36.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-6727" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    should enforce an AppArmor profile [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43

    Only supported for node OS distro [gci ubuntu] (not debian)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:30.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1122 02:34:30.344631 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 22 02:34:30.344: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 22 02:34:30.349: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Nov 22 02:34:30.362: INFO: Waiting up to 5m0s for pod "security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7" in namespace "security-context-9549" to be "Succeeded or Failed"
Nov 22 02:34:30.365: INFO: Pod "security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.65513ms
Nov 22 02:34:32.368: INFO: Pod "security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006076847s
Nov 22 02:34:34.372: INFO: Pod "security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010269172s
Nov 22 02:34:36.376: INFO: Pod "security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014113237s
Nov 22 02:34:38.379: INFO: Pod "security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016596353s
STEP: Saw pod success
Nov 22 02:34:38.379: INFO: Pod "security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7" satisfied condition "Succeeded or Failed"
Nov 22 02:34:38.381: INFO: Trying to get logs from node node2 pod security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7 container test-container:
STEP: delete the pod
Nov 22 02:34:38.405: INFO: Waiting for pod security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7 to disappear
Nov 22 02:34:38.408: INFO: Pod security-context-73f5f91a-7e45-41b1-816a-e08b9987a1a7 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:38.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9549" for this suite.
• [SLOW TEST:8.091 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:38.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33
Nov 22 02:34:38.834: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:38.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-1912" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40

  Only supported for providers [gce gke] (not local)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:30.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pod Container lifecycle
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Nov 22 02:34:30.839: INFO: Waiting up to 5m0s for pod "pod-always-succeedda1a192f-6d09-46b5-b3d9-874607305dd5" in namespace "pods-2597" to be "Succeeded or Failed"
Nov 22 02:34:30.842: INFO: Pod "pod-always-succeedda1a192f-6d09-46b5-b3d9-874607305dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.706996ms
Nov 22 02:34:32.845: INFO: Pod "pod-always-succeedda1a192f-6d09-46b5-b3d9-874607305dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006475024s
Nov 22 02:34:34.850: INFO: Pod "pod-always-succeedda1a192f-6d09-46b5-b3d9-874607305dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01101854s
Nov 22 02:34:36.854: INFO: Pod "pod-always-succeedda1a192f-6d09-46b5-b3d9-874607305dd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014626079s
STEP: Saw pod success
Nov 22 02:34:36.854: INFO: Pod "pod-always-succeedda1a192f-6d09-46b5-b3d9-874607305dd5" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:38.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2597" for this suite.
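The "should not create extra sandbox" spec above submits a pod whose only container exits 0, then inspects the pod's events to confirm the kubelet created exactly one sandbox. A sketch of that kind of pod, not the exact manifest the test builds (image and container name are illustrative; the point is a restartPolicy under which exit 0 means done, so no second sandbox is warranted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-always-succeed     # name pattern matches the log above; random suffix omitted
spec:
  restartPolicy: OnFailure     # exit 0 => container is done, kubelet must not restart it
  containers:
  - name: succeed              # illustrative container name
    image: busybox             # illustrative image
    command: ["sh", "-c", "exit 0"]
```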
• [SLOW TEST:8.079 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":1,"skipped":260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:38.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Nov 22 02:34:38.964: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:38.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-5021" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:30.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
W1122 02:34:30.235483 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 22 02:34:30.235: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 22 02:34:30.238: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477
STEP: Creating pod startup-override-fb0a569b-6302-4f5c-b2c0-fec7ea6dec18 in namespace container-probe-2878
Nov 22 02:34:36.263: INFO: Started pod startup-override-fb0a569b-6302-4f5c-b2c0-fec7ea6dec18 in namespace container-probe-2878
STEP: checking the pod's current state and verifying that restartCount is present
Nov 22 02:34:36.266: INFO: Initial restart count of pod startup-override-fb0a569b-6302-4f5c-b2c0-fec7ea6dec18 is 0
Nov 22 02:34:40.276: INFO: Restart count of pod container-probe-2878/startup-override-fb0a569b-6302-4f5c-b2c0-fec7ea6dec18 is now 1 (4.010438011s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:40.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2878" for this suite.
• [SLOW TEST:10.081 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477
------------------------------
{"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":1,"skipped":31,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:40.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Nov 22 02:34:40.362: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:40.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-8579" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47

    Only supported for node OS distro [gci ubuntu] (not debian)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:30.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1122 02:34:30.336813 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 22 02:34:30.337: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 22 02:34:30.338: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Nov 22 02:34:30.352: INFO: Waiting up to 5m0s for pod "security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232" in namespace "security-context-3123" to be "Succeeded or Failed"
Nov 22 02:34:30.353: INFO: Pod "security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232": Phase="Pending", Reason="", readiness=false. Elapsed: 1.862426ms
Nov 22 02:34:32.358: INFO: Pod "security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006735112s
Nov 22 02:34:34.362: INFO: Pod "security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009983508s
Nov 22 02:34:36.366: INFO: Pod "security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014633438s
Nov 22 02:34:38.370: INFO: Pod "security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018449443s
Nov 22 02:34:40.373: INFO: Pod "security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.021277835s
STEP: Saw pod success
Nov 22 02:34:40.373: INFO: Pod "security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232" satisfied condition "Succeeded or Failed"
Nov 22 02:34:40.375: INFO: Trying to get logs from node node2 pod security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232 container test-container:
STEP: delete the pod
Nov 22 02:34:40.488: INFO: Waiting for pod security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232 to disappear
Nov 22 02:34:40.490: INFO: Pod security-context-2fa7bf40-dba9-44d4-9ef5-0878116f3232 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:34:40.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3123" for this suite.
• [SLOW TEST:10.180 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":1,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:30.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Nov 22 02:34:30.588: INFO: Waiting up to 5m0s for pod "security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f" in namespace "security-context-9150" to be "Succeeded or Failed"
Nov 22 02:34:30.590: INFO: Pod "security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225114ms
Nov 22 02:34:32.594: INFO: Pod "security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005659011s
Nov 22 02:34:34.598: INFO: Pod "security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009698075s
Nov 22 02:34:36.601: INFO: Pod "security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013466523s
Nov 22 02:34:38.605: INFO: Pod "security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01696881s
Nov 22 02:34:40.608: INFO: Pod "security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.019974226s STEP: Saw pod success Nov 22 02:34:40.608: INFO: Pod "security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f" satisfied condition "Succeeded or Failed" Nov 22 02:34:40.611: INFO: Trying to get logs from node node2 pod security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f container test-container: STEP: delete the pod Nov 22 02:34:40.626: INFO: Waiting for pod security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f to disappear Nov 22 02:34:40.628: INFO: Pod security-context-55dd1692-91fe-4e8f-a9b5-223b724c8a4f no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:34:40.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9150" for this suite. • [SLOW TEST:10.079 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":2,"skipped":125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:36.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Nov 22 02:34:36.840: INFO: Waiting up to 5m0s for pod "busybox-user-0-db5475d6-8272-4c4b-b253-f7f63c84f60a" in namespace "security-context-test-2633" to be "Succeeded or Failed" Nov 22 02:34:36.842: INFO: Pod "busybox-user-0-db5475d6-8272-4c4b-b253-f7f63c84f60a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.884585ms Nov 22 02:34:38.845: INFO: Pod "busybox-user-0-db5475d6-8272-4c4b-b253-f7f63c84f60a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005029236s Nov 22 02:34:40.848: INFO: Pod "busybox-user-0-db5475d6-8272-4c4b-b253-f7f63c84f60a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007856857s Nov 22 02:34:40.848: INFO: Pod "busybox-user-0-db5475d6-8272-4c4b-b253-f7f63c84f60a" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:34:40.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2633" for this suite. 
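The `busybox-user-0` test above waits for a pod running as UID 0 to reach "Succeeded or Failed". It boils down to a pod spec like the following minimal sketch; the helper name, image tag, and command are assumptions for illustration, not taken from this log or from the e2e framework's code:

```python
def busybox_user_pod(uid: int) -> dict:
    """Build a pod manifest whose single container reports its UID.

    Sketch only: image tag and command are assumed, not from the test source.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": f"busybox-user-{uid}"},
        "spec": {
            # restartPolicy Never lets the pod settle into Succeeded/Failed,
            # which is the condition the e2e framework polls for above.
            "restartPolicy": "Never",
            "containers": [{
                "name": "busybox",
                "image": "busybox:1.29",  # assumed tag
                "command": ["sh", "-c", "id -u"],
                # runAsUser forces the container process UID; the test
                # asserts the container runs (and exits cleanly) as uid 0.
                "securityContext": {"runAsUser": uid},
            }],
        },
    }

pod = busybox_user_pod(0)
```

The polling loop in the log ("Waiting up to 5m0s for pod … to be 'Succeeded or Failed'") is the framework watching this pod's `status.phase` until it leaves Pending/Running.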
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:38.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Nov 22 02:34:38.974: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-9ca1136f-2abe-46f9-8ba6-66574702b081" in namespace "security-context-test-8553" to be "Succeeded or Failed" Nov 22 02:34:38.978: INFO: Pod "alpine-nnp-true-9ca1136f-2abe-46f9-8ba6-66574702b081": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241891ms Nov 22 02:34:40.980: INFO: Pod "alpine-nnp-true-9ca1136f-2abe-46f9-8ba6-66574702b081": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005854439s Nov 22 02:34:42.984: INFO: Pod "alpine-nnp-true-9ca1136f-2abe-46f9-8ba6-66574702b081": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009240099s Nov 22 02:34:44.987: INFO: Pod "alpine-nnp-true-9ca1136f-2abe-46f9-8ba6-66574702b081": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012358677s Nov 22 02:34:44.987: INFO: Pod "alpine-nnp-true-9ca1136f-2abe-46f9-8ba6-66574702b081" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:34:44.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8553" for this suite. • [SLOW TEST:6.061 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":289,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:39.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:34:45.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9857" for this suite. • [SLOW TEST:6.043 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:45.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container 
[AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:34:50.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4602" for this suite. • [SLOW TEST:5.096 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":3,"skipped":403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:50.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:34:50.406: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-8704" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":4,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:45.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Nov 22 02:34:45.290: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Nov 22 02:34:45.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1882 create -f -' Nov 22 02:34:45.744: INFO: stderr: "" Nov 22 02:34:45.744: INFO: stdout: "secret/test-secret created\n" Nov 22 02:34:45.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1882 create -f -' Nov 22 02:34:46.041: INFO: stderr: "" Nov 22 02:34:46.041: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Nov 22 02:34:52.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1882 logs secret-test-pod test-container' Nov 22 02:34:52.221: INFO: stderr: "" Nov 22 02:34:52.221: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] 
[Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:34:52.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-1882" for this suite. • [SLOW TEST:6.973 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":3,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:50.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 22 02:34:55.510: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:34:55.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8004" for this suite. • [SLOW TEST:5.067 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":5,"skipped":446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:30.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop W1122 02:34:30.651899 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 22 02:34:30.652: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 22 02:34:30.653: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Nov 22 02:34:56.711: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:34:56.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8889" for this suite. • [SLOW TEST:26.088 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":1,"skipped":186,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:52.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-3228349f-e554-48e2-8cdd-c43a07fc8329 in namespace container-probe-7856 Nov 22 02:34:58.396: INFO: Started pod liveness-override-3228349f-e554-48e2-8cdd-c43a07fc8329 in namespace container-probe-7856 STEP: checking the pod's current state and verifying that restartCount is present Nov 22 02:34:58.399: INFO: Initial restart count of pod liveness-override-3228349f-e554-48e2-8cdd-c43a07fc8329 is 0 Nov 22 02:35:00.406: INFO: Restart count of pod container-probe-7856/liveness-override-3228349f-e554-48e2-8cdd-c43a07fc8329 is now 1 (2.006662512s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:00.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7856" for this suite. 
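The `liveness-override` test above exercises the ProbeTerminationGracePeriod feature: a `terminationGracePeriodSeconds` set on the liveness probe itself takes precedence over the pod-level value when the probe fails and the kubelet kills the container. A minimal sketch of such a spec follows; all concrete values (image, timings) are assumptions, not read from this log:

```python
def liveness_override_pod() -> dict:
    """Pod whose liveness probe always fails and carries its own
    terminationGracePeriodSeconds, overriding the pod-level setting.

    Sketch only: image and timing values are assumed for illustration.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "liveness-override"},
        "spec": {
            # Pod-level grace period: deliberately long, so the restart in
            # the log (~2s after start) can only happen if the probe-level
            # value below is honored.
            "terminationGracePeriodSeconds": 600,
            "containers": [{
                "name": "cntr",
                "image": "busybox:1.29",  # assumed
                "command": ["sh", "-c", "sleep 1000"],
                "livenessProbe": {
                    "exec": {"command": ["/bin/false"]},  # always fails
                    "initialDelaySeconds": 1,
                    "periodSeconds": 1,
                    "failureThreshold": 1,
                    # Probe-level override: kubelet uses this, not 600s,
                    # when terminating the container after a failed probe.
                    "terminationGracePeriodSeconds": 5,
                },
            }],
        },
    }
```

The "Restart count … is now 1 (2.006662512s elapsed)" entry in the log is consistent with the short probe-level grace period winning over a longer pod-level one.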
• [SLOW TEST:8.062 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":4,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:56.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:00.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4156" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":2,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:55.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Nov 22 02:34:55.657: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-f70c8fbc-aa0e-4f7d-9463-16c56ad32500" in namespace "security-context-test-1624" to be "Succeeded or Failed" Nov 22 02:34:55.659: INFO: Pod "busybox-readonly-true-f70c8fbc-aa0e-4f7d-9463-16c56ad32500": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466468ms Nov 22 02:34:57.664: INFO: Pod "busybox-readonly-true-f70c8fbc-aa0e-4f7d-9463-16c56ad32500": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007472439s Nov 22 02:34:59.669: INFO: Pod "busybox-readonly-true-f70c8fbc-aa0e-4f7d-9463-16c56ad32500": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.011610763s Nov 22 02:35:01.673: INFO: Pod "busybox-readonly-true-f70c8fbc-aa0e-4f7d-9463-16c56ad32500": Phase="Failed", Reason="", readiness=false. Elapsed: 6.015851794s Nov 22 02:35:01.673: INFO: Pod "busybox-readonly-true-f70c8fbc-aa0e-4f7d-9463-16c56ad32500" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:01.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1624" for this suite. • [SLOW TEST:6.053 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:30.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation W1122 02:34:30.261122 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, 
unavailable in v1.25+ Nov 22 02:34:30.261: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 22 02:34:30.263: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Nov 22 02:34:30.294: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:32.297: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:34.300: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:36.299: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:38.302: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:40.297: INFO: The status of Pod master is Running (Ready = true) Nov 22 02:34:40.311: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:42.318: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:44.317: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:46.314: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:48.316: INFO: The status of Pod slave is Running (Ready = true) Nov 22 02:34:48.334: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:50.337: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:52.339: INFO: The status of Pod private is Pending, waiting for it to 
be Running (with Ready = true) Nov 22 02:34:54.339: INFO: The status of Pod private is Running (Ready = true) Nov 22 02:34:54.354: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:56.358: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:34:58.360: INFO: The status of Pod default is Running (Ready = true) Nov 22 02:34:58.366: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:58.366: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:58.453: INFO: Exec stderr: "" Nov 22 02:34:58.456: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:58.456: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:58.548: INFO: Exec stderr: "" Nov 22 02:34:58.550: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:58.550: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:58.630: INFO: Exec stderr: "" Nov 22 02:34:58.632: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:58.632: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:58.711: INFO: Exec stderr: "" Nov 22 02:34:58.713: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Nov 22 02:34:58.713: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:58.803: INFO: Exec stderr: "" Nov 22 02:34:58.806: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:58.806: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:58.888: INFO: Exec stderr: "" Nov 22 02:34:58.891: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:58.891: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:58.969: INFO: Exec stderr: "" Nov 22 02:34:58.971: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:58.971: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.052: INFO: Exec stderr: "" Nov 22 02:34:59.054: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.054: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.133: INFO: Exec stderr: "" Nov 22 02:34:59.137: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.137: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.216: INFO: Exec stderr: "" Nov 22 02:34:59.220: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-4243 
PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.220: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.318: INFO: Exec stderr: "" Nov 22 02:34:59.322: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.322: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.410: INFO: Exec stderr: "" Nov 22 02:34:59.413: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.413: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.497: INFO: Exec stderr: "" Nov 22 02:34:59.500: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.500: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.586: INFO: Exec stderr: "" Nov 22 02:34:59.589: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.589: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.677: INFO: Exec stderr: "" Nov 22 02:34:59.680: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.680: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.764: INFO: Exec stderr: "" Nov 22 02:34:59.767: INFO: ExecWithOptions 
{Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.767: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.850: INFO: Exec stderr: "" Nov 22 02:34:59.852: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.852: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:34:59.951: INFO: Exec stderr: "" Nov 22 02:34:59.954: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:34:59.954: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:00.036: INFO: Exec stderr: "" Nov 22 02:35:00.039: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:00.039: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:00.119: INFO: Exec stderr: "" Nov 22 02:35:02.137: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-4243"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-4243"/host; echo host > "/var/lib/kubelet/mount-propagation-4243"/host/file] Namespace:mount-propagation-4243 
PodName:hostexec-node2-jpskf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 22 02:35:02.137: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:02.237: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:02.237: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:02.319: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:02.322: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:02.322: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:02.402: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:02.404: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:02.404: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:02.488: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:02.491: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:02.491: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:02.652: 
INFO: pod default mount default: stdout: "default", stderr: "" error: Nov 22 02:35:02.654: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:02.654: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:02.736: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:02.739: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:02.739: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:02.838: INFO: pod master mount master: stdout: "master", stderr: "" error: Nov 22 02:35:02.841: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:02.841: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:02.921: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:02.923: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:02.923: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.006: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:03.008: INFO: ExecWithOptions {Command:[/bin/sh -c cat 
/mnt/test/default/file] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.008: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.089: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:03.091: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.091: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.178: INFO: pod master mount host: stdout: "host", stderr: "" error: Nov 22 02:35:03.180: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.180: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.269: INFO: pod slave mount master: stdout: "master", stderr: "" error: Nov 22 02:35:03.272: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.272: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.355: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Nov 22 02:35:03.357: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.357: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.444: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': 
No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:03.447: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.447: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.530: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:03.532: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.532: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.620: INFO: pod slave mount host: stdout: "host", stderr: "" error: Nov 22 02:35:03.622: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.622: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.712: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:03.714: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.714: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.814: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:03.816: INFO: ExecWithOptions {Command:[/bin/sh -c cat 
/mnt/test/private/file] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.816: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.900: INFO: pod private mount private: stdout: "private", stderr: "" error: Nov 22 02:35:03.902: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.902: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:03.988: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:03.991: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:03.991: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:04.072: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Nov 22 02:35:04.072: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-4243"/master/file` = master] Namespace:mount-propagation-4243 PodName:hostexec-node2-jpskf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 22 02:35:04.072: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:04.176: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-4243"/slave/file] Namespace:mount-propagation-4243 PodName:hostexec-node2-jpskf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 22 02:35:04.176: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:04.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-4243"/host] Namespace:mount-propagation-4243 PodName:hostexec-node2-jpskf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 22 02:35:04.265: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:04.366: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-4243 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:04.366: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:04.460: INFO: Exec stderr: "" Nov 22 02:35:04.463: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-4243 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:04.463: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:04.551: INFO: Exec stderr: "" Nov 22 02:35:04.554: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-4243 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:04.554: INFO: >>> kubeConfig: /root/.kube/config Nov 22 02:35:04.642: INFO: Exec stderr: "" Nov 22 02:35:04.645: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-4243 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 22 02:35:04.645: INFO: >>> 
kubeConfig: /root/.kube/config Nov 22 02:35:04.738: INFO: Exec stderr: "" Nov 22 02:35:04.738: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-4243"] Namespace:mount-propagation-4243 PodName:hostexec-node2-jpskf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 22 02:35:04.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node2-jpskf in namespace mount-propagation-4243 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:04.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-4243" for this suite. • [SLOW TEST:34.599 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":1,"skipped":29,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:35:04.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:20.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3855" for this suite. • [SLOW TEST:16.078 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":2,"skipped":53,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:35:01.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be ready immediately after startupProbe succeeds 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Nov 22 02:35:23.568: INFO: The status of Pod startup-17713752-0e63-49be-bcac-28cc55520c0b is Running (Ready = true) Nov 22 02:35:23.570: INFO: Container started at 2021-11-22 02:35:23.565766454 +0000 UTC m=+55.206966201, pod became ready at 2021-11-22 02:35:23.56813769 +0000 UTC m=+55.209337364, 2.371163ms after startupProbe succeeded [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:23.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2728" for this suite. • [SLOW TEST:22.054 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":3,"skipped":567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:35:21.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Nov 22 02:35:21.228: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Nov 22 02:35:21.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6842 create -f -' Nov 22 02:35:21.683: INFO: stderr: "" Nov 22 02:35:21.683: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Nov 22 02:35:25.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6842 logs dapi-test-pod test-container' Nov 22 02:35:25.855: INFO: stderr: "" Nov 22 02:35:25.855: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-6842\nMY_POD_IP=10.244.4.82\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Nov 22 02:35:25.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6842 logs dapi-test-pod test-container' Nov 22 02:35:26.039: INFO: stderr: "" Nov 22 02:35:26.039: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-6842\nMY_POD_IP=10.244.4.82\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] 
[sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:26.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-6842" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":3,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:36.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-414c0b78-0eef-460e-a0ac-3c0f63d14eac in namespace container-probe-4754 Nov 22 02:34:44.980: INFO: Started pod busybox-414c0b78-0eef-460e-a0ac-3c0f63d14eac in namespace container-probe-4754 STEP: checking the pod's current state and verifying that restartCount is present Nov 22 02:34:44.983: INFO: Initial restart count of pod busybox-414c0b78-0eef-460e-a0ac-3c0f63d14eac is 0 Nov 22 02:35:31.072: INFO: Restart count of pod container-probe-4754/busybox-414c0b78-0eef-460e-a0ac-3c0f63d14eac is now 1 (46.088620164s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:31.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4754" for this suite. • [SLOW TEST:54.143 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":2,"skipped":280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:30.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1122 02:34:30.613696 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 22 02:34:30.613: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 22 02:34:30.615: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-b8702a58-b4f7-4a9b-87f3-94c1a09685a1 in namespace container-probe-1837 Nov 22 02:34:40.636: INFO: Started pod busybox-b8702a58-b4f7-4a9b-87f3-94c1a09685a1 in namespace container-probe-1837 STEP: checking the pod's current state and verifying that restartCount is present Nov 22 02:34:40.639: INFO: Initial restart count of pod busybox-b8702a58-b4f7-4a9b-87f3-94c1a09685a1 is 0 Nov 22 02:35:34.764: INFO: Restart count of pod container-probe-1837/busybox-b8702a58-b4f7-4a9b-87f3-94c1a09685a1 is now 1 (54.125384588s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:34.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1837" for this suite. • [SLOW TEST:64.185 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":1,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:35:31.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Nov 22 02:35:31.202: INFO: Waiting up to 5m0s for pod "security-context-1a631e0a-6c9d-4c87-9fcd-0ee125332d6e" in namespace "security-context-8536" to be "Succeeded or Failed" Nov 22 02:35:31.204: INFO: Pod "security-context-1a631e0a-6c9d-4c87-9fcd-0ee125332d6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055399ms Nov 22 02:35:33.209: INFO: Pod "security-context-1a631e0a-6c9d-4c87-9fcd-0ee125332d6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006554346s Nov 22 02:35:35.212: INFO: Pod "security-context-1a631e0a-6c9d-4c87-9fcd-0ee125332d6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009314777s Nov 22 02:35:37.216: INFO: Pod "security-context-1a631e0a-6c9d-4c87-9fcd-0ee125332d6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013594097s STEP: Saw pod success Nov 22 02:35:37.216: INFO: Pod "security-context-1a631e0a-6c9d-4c87-9fcd-0ee125332d6e" satisfied condition "Succeeded or Failed" Nov 22 02:35:37.219: INFO: Trying to get logs from node node2 pod security-context-1a631e0a-6c9d-4c87-9fcd-0ee125332d6e container test-container: STEP: delete the pod Nov 22 02:35:37.235: INFO: Waiting for pod security-context-1a631e0a-6c9d-4c87-9fcd-0ee125332d6e to disappear Nov 22 02:35:37.237: INFO: Pod security-context-1a631e0a-6c9d-4c87-9fcd-0ee125332d6e no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:35:37.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8536" for this suite. 
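The Security Context test that just finished creates a pod requesting an unconfined seccomp profile (the log shows the legacy `seccomp.security.alpha.kubernetes.io/pod` annotation) and waits for it to reach Succeeded. As a minimal sketch of what such a pod manifest looks like — the pod/container names and image are illustrative assumptions, not taken from the suite, and the `securityContext.seccompProfile` field is the current (v1.19+) equivalent of the annotation seen above:

```python
def seccomp_unconfined_pod(name: str, image: str) -> dict:
    """Build a pod manifest whose containers all run with seccomp disabled.

    Sketch only: shows both the legacy annotation form (as logged by this
    test) and the field-based form that replaced it in Kubernetes v1.19.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": name,
            # Legacy (pre-1.19) pod-level annotation, as seen in the log:
            "annotations": {
                "seccomp.security.alpha.kubernetes.io/pod": "unconfined",
            },
        },
        "spec": {
            # Current field-based form of the same request:
            "securityContext": {"seccompProfile": {"type": "Unconfined"}},
            "restartPolicy": "Never",
            "containers": [
                {
                    "name": "test-container",
                    "image": image,
                    # On Linux, the Seccomp line of /proc/1/status reports
                    # the seccomp mode of the container's init process;
                    # "Seccomp: 0" means unconfined.
                    "command": ["sh", "-c", "grep Seccomp /proc/1/status"],
                }
            ],
        },
    }

pod = seccomp_unconfined_pod("seccomp-demo", "busybox")
```

Submitted to a cluster (e.g. via `kubectl create -f -`), a pod like this runs its command without a seccomp filter and exits, which is why the test above only has to wait for the "Succeeded or Failed" condition and then inspect the container logs.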
• [SLOW TEST:6.090 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":3,"skipped":313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:35:34.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Nov 22 02:35:35.012: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-8453" to be "Succeeded or Failed"
Nov 22 02:35:35.014: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 1.827952ms
Nov 22 02:35:37.018: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005719452s
Nov 22 02:35:39.022: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009926795s
Nov 22 02:35:41.026: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014615089s
Nov 22 02:35:41.026: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:35:41.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8453" for this suite.

• [SLOW TEST:6.062 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:35:37.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Nov 22 02:35:37.708: INFO: Waiting up to 5m0s for pod "security-context-cede3ba3-c751-4976-93ac-1aa0bcf0f386" in namespace "security-context-227" to be "Succeeded or Failed"
Nov 22 02:35:37.710: INFO: Pod "security-context-cede3ba3-c751-4976-93ac-1aa0bcf0f386": Phase="Pending", Reason="", readiness=false. Elapsed: 1.831158ms
Nov 22 02:35:39.716: INFO: Pod "security-context-cede3ba3-c751-4976-93ac-1aa0bcf0f386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007041263s
Nov 22 02:35:41.720: INFO: Pod "security-context-cede3ba3-c751-4976-93ac-1aa0bcf0f386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011325145s
STEP: Saw pod success
Nov 22 02:35:41.720: INFO: Pod "security-context-cede3ba3-c751-4976-93ac-1aa0bcf0f386" satisfied condition "Succeeded or Failed"
Nov 22 02:35:41.722: INFO: Trying to get logs from node node1 pod security-context-cede3ba3-c751-4976-93ac-1aa0bcf0f386 container test-container:
STEP: delete the pod
Nov 22 02:35:41.735: INFO: Waiting for pod security-context-cede3ba3-c751-4976-93ac-1aa0bcf0f386 to disappear
Nov 22 02:35:41.737: INFO: Pod security-context-cede3ba3-c751-4976-93ac-1aa0bcf0f386 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:35:41.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-227" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":4,"skipped":542,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:35:42.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run without a specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:35:46.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1201" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":5,"skipped":768,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:40.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371
STEP: Creating pod startup-f5346287-8d59-4347-a9f4-167260a0fba6 in namespace container-probe-409
Nov 22 02:34:50.967: INFO: Started pod startup-f5346287-8d59-4347-a9f4-167260a0fba6 in namespace container-probe-409
STEP: checking the pod's current state and verifying that restartCount is present
Nov 22 02:34:50.970: INFO: Initial restart count of pod startup-f5346287-8d59-4347-a9f4-167260a0fba6 is 0
Nov 22 02:35:53.107: INFO: Restart count of pod container-probe-409/startup-f5346287-8d59-4347-a9f4-167260a0fba6 is now 1 (1m2.136560185s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:35:53.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-409" for this suite.

• [SLOW TEST:72.195 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":3,"skipped":167,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:35:41.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53
[It] should be submitted and removed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Nov 22 02:35:50.425: INFO: start=2021-11-22 02:35:45.396889212 +0000 UTC m=+77.035803900, now=2021-11-22 02:35:50.425753335 +0000 UTC m=+82.064668060, kubelet pod:
{"metadata":{"name":"pod-submit-remove-387c2596-2b74-468c-99c2-506be7cfd7ab","namespace":"pods-7288","uid":"faf54afe-919b-4e87-8e57-ee732eb692cb","resourceVersion":"84568","creationTimestamp":"2021-11-22T02:35:41Z","deletionTimestamp":"2021-11-22T02:36:15Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"359951105"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.89\"\n ],\n \"mac\": \"d2:44:d4:a0:46:0f\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.89\"\n ],\n \"mac\": \"d2:44:d4:a0:46:0f\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-11-22T02:35:41.379104043Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-11-22T02:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-zjwc9","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"nam
e":"kube-api-access-zjwc9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-22T02:35:41Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-22T02:35:48Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-22T02:35:48Z","reason":"ContainersNotReady","message":"containers with unready status: 
[agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-22T02:35:41Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.89","podIPs":[{"ip":"10.244.4.89"}],"startTime":"2021-11-22T02:35:41Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-11-22T02:35:44Z","finishedAt":"2021-11-22T02:35:46Z","containerID":"docker://6f56ca5a4b22569c3019e008da3d86db735af6b98ced5a5d301bc8424ec2b94a"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://6f56ca5a4b22569c3019e008da3d86db735af6b98ced5a5d301bc8424ec2b94a","started":false}],"qosClass":"BestEffort"}} Nov 22 02:35:55.414: INFO: start=2021-11-22 02:35:45.396889212 +0000 UTC m=+77.035803900, now=2021-11-22 02:35:55.414981988 +0000 UTC m=+87.053896760, kubelet pod: {"metadata":{"name":"pod-submit-remove-387c2596-2b74-468c-99c2-506be7cfd7ab","namespace":"pods-7288","uid":"faf54afe-919b-4e87-8e57-ee732eb692cb","resourceVersion":"84568","creationTimestamp":"2021-11-22T02:35:41Z","deletionTimestamp":"2021-11-22T02:36:15Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"359951105"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.89\"\n ],\n \"mac\": \"d2:44:d4:a0:46:0f\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.89\"\n ],\n \"mac\": \"d2:44:d4:a0:46:0f\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2021-11-22T02:35:41.379104043Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-11-22T02:35:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-zjwc9","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-zjwc9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"sta
tus":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-22T02:35:41Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-22T02:35:48Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-22T02:35:48Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-22T02:35:41Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.89","podIPs":[{"ip":"10.244.4.89"}],"startTime":"2021-11-22T02:35:41Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-11-22T02:35:44Z","finishedAt":"2021-11-22T02:35:46Z","containerID":"docker://6f56ca5a4b22569c3019e008da3d86db735af6b98ced5a5d301bc8424ec2b94a"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://6f56ca5a4b22569c3019e008da3d86db735af6b98ced5a5d301bc8424ec2b94a","started":false}],"qosClass":"BestEffort"}} Nov 22 02:36:00.421: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:36:00.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7288" for this suite. 
• [SLOW TEST:19.096 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":3,"skipped":429,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:35:00.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
STEP: Creating pod busybox-d143ea98-2f41-42a8-81d3-54bd48241420 in namespace container-probe-910
Nov 22 02:35:04.750: INFO: Started pod busybox-d143ea98-2f41-42a8-81d3-54bd48241420 in namespace container-probe-910
Nov 22 02:35:04.750: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (888ns elapsed)
Nov 22 02:35:06.753: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (2.002876373s elapsed)
Nov 22 02:35:08.754: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (4.003756472s elapsed)
Nov 22 02:35:10.755: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (6.005534927s elapsed)
Nov 22 02:35:12.758: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (8.007972481s elapsed)
Nov 22 02:35:14.760: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (10.010445585s elapsed)
Nov 22 02:35:16.764: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (12.014079893s elapsed)
Nov 22 02:35:18.765: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (14.015187948s elapsed)
Nov 22 02:35:20.765: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (16.015513775s elapsed)
Nov 22 02:35:22.767: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (18.016791815s elapsed)
Nov 22 02:35:24.768: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (20.01819509s elapsed)
Nov 22 02:35:26.771: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (22.021318521s elapsed)
Nov 22 02:35:28.772: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (24.02208175s elapsed)
Nov 22 02:35:30.775: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (26.024709927s elapsed)
Nov 22 02:35:32.776: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (28.026385054s elapsed)
Nov 22 02:35:34.777: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (30.026899611s elapsed)
Nov 22 02:35:36.780: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (32.030547886s elapsed)
Nov 22 02:35:38.781: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (34.031184835s elapsed)
Nov 22 02:35:40.783: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (36.033541067s elapsed)
Nov 22 02:35:42.787: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (38.037068236s elapsed)
Nov 22 02:35:44.791: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (40.04125873s elapsed)
Nov 22 02:35:46.794: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (42.04392604s elapsed)
Nov 22 02:35:48.795: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (44.04503551s elapsed)
Nov 22 02:35:50.797: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (46.04737491s elapsed)
Nov 22 02:35:52.801: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (48.050850141s elapsed)
Nov 22 02:35:54.802: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (50.051662656s elapsed)
Nov 22 02:35:56.803: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (52.052841189s elapsed)
Nov 22 02:35:58.803: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (54.053469869s elapsed)
Nov 22 02:36:00.804: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (56.054205943s elapsed)
Nov 22 02:36:02.806: INFO: pod container-probe-910/busybox-d143ea98-2f41-42a8-81d3-54bd48241420 is not ready (58.055673362s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:04.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-910" for this suite.

• [SLOW TEST:64.107 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":5,"skipped":650,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:04.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Nov 22 02:36:04.872: INFO: Waiting up to 5m0s for pod "downward-api-06c45557-bf3a-4eba-81b3-1edca4730109" in namespace "downward-api-9747" to be "Succeeded or Failed"
Nov 22 02:36:04.874: INFO: Pod "downward-api-06c45557-bf3a-4eba-81b3-1edca4730109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082929ms
Nov 22 02:36:06.876: INFO: Pod "downward-api-06c45557-bf3a-4eba-81b3-1edca4730109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004867235s
Nov 22 02:36:08.880: INFO: Pod "downward-api-06c45557-bf3a-4eba-81b3-1edca4730109": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008322374s
Nov 22 02:36:10.884: INFO: Pod "downward-api-06c45557-bf3a-4eba-81b3-1edca4730109": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012470549s
Nov 22 02:36:12.889: INFO: Pod "downward-api-06c45557-bf3a-4eba-81b3-1edca4730109": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017516476s
Nov 22 02:36:14.893: INFO: Pod "downward-api-06c45557-bf3a-4eba-81b3-1edca4730109": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.021440946s
STEP: Saw pod success
Nov 22 02:36:14.893: INFO: Pod "downward-api-06c45557-bf3a-4eba-81b3-1edca4730109" satisfied condition "Succeeded or Failed"
Nov 22 02:36:14.896: INFO: Trying to get logs from node node2 pod downward-api-06c45557-bf3a-4eba-81b3-1edca4730109 container dapi-container:
STEP: delete the pod
Nov 22 02:36:14.911: INFO: Waiting for pod downward-api-06c45557-bf3a-4eba-81b3-1edca4730109 to disappear
Nov 22 02:36:14.914: INFO: Pod downward-api-06c45557-bf3a-4eba-81b3-1edca4730109 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:14.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9747" for this suite.
• [SLOW TEST:10.082 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":6,"skipped":659,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:34:40.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
Nov 22 02:34:40.498: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] liveness pods should be automatically restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
Nov 22 02:34:40.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7576 create -f -'
Nov 22 02:34:41.059: INFO: stderr: ""
Nov 22 02:34:41.059: INFO: stdout: "pod/liveness-exec created\n"
Nov 22 02:34:41.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7576 create -f -'
Nov 22 02:34:41.374: INFO: stderr: ""
Nov 22 02:34:41.374: INFO: stdout: "pod/liveness-http created\n"
STEP: Check restarts
Nov 22 02:34:51.383: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:34:51.383: INFO: Pod: liveness-http, restart count:0
Nov 22 02:34:53.387: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:34:53.387: INFO: Pod: liveness-http, restart count:0
Nov 22 02:34:55.391: INFO: Pod: liveness-http, restart count:0
Nov 22 02:34:55.391: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:34:57.394: INFO: Pod: liveness-http, restart count:0
Nov 22 02:34:57.394: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:34:59.397: INFO: Pod: liveness-http, restart count:0
Nov 22 02:34:59.397: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:01.403: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:01.403: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:03.406: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:03.409: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:05.409: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:05.413: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:07.418: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:07.418: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:09.423: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:09.423: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:11.427: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:11.427: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:13.430: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:13.431: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:15.434: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:15.434: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:17.437: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:17.437: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:19.445: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:19.445: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:21.449: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:21.449: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:23.452: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:23.452: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:25.454: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:25.455: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:27.459: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:27.459: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:29.462: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:29.462: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:31.466: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:31.466: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:33.469: INFO: Pod: liveness-http, restart count:0
Nov 22 02:35:33.469: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:35.472: INFO: Pod: liveness-http, restart count:1
Nov 22 02:35:35.472: INFO: Saw liveness-http restart, succeeded...
Nov 22 02:35:35.472: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:37.476: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:39.480: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:41.484: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:43.487: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:45.490: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:47.493: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:49.497: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:51.501: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:53.503: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:55.506: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:57.512: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:35:59.516: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:36:01.522: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:36:03.525: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:36:05.528: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:36:07.533: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:36:09.538: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:36:11.543: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:36:13.547: INFO: Pod: liveness-exec, restart count:0
Nov 22 02:36:15.550: INFO: Pod: liveness-exec, restart count:1
Nov 22 02:36:15.550: INFO: Saw liveness-exec restart, succeeded...
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:15.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-7576" for this suite.
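The two pods exercised above come from the stock Kubernetes liveness examples (the test pipes manifests into `kubectl create -f -`). A minimal sketch of the exec-style variant follows; the image, command, and probe timings are assumptions for illustration, not values recovered from this log:

```yaml
# Hypothetical sketch of a liveness-exec style pod; image and timings are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
```

Once `cat /tmp/healthy` starts failing, the kubelet restarts the container; that is the restart-count transition from 0 to 1 that the polling above waits for.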
• [SLOW TEST:95.085 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Liveness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66
    liveness pods should be automatically restarted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":2,"skipped":107,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:35:46.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
STEP: Creating pod liveness-0fda6854-286c-468c-aba5-4fa5a87a0220 in namespace container-probe-2419
Nov 22 02:35:52.414: INFO: Started pod liveness-0fda6854-286c-468c-aba5-4fa5a87a0220 in namespace container-probe-2419
STEP: checking the pod's current state and verifying that restartCount is present
Nov 22 02:35:52.416: INFO: Initial restart count of pod liveness-0fda6854-286c-468c-aba5-4fa5a87a0220 is 0
Nov 22 02:36:18.471: INFO: Restart count of pod container-probe-2419/liveness-0fda6854-286c-468c-aba5-4fa5a87a0220 is now 1 (26.05437749s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:18.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2419" for this suite.
• [SLOW TEST:32.107 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":6,"skipped":859,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:00.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull from private registry without secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:18.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5282" for this suite.
• [SLOW TEST:18.112 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":4,"skipped":676,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:19.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:21.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-1476" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":5,"skipped":750,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:21.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140
STEP: Creating ConfigMap configmap-8853/configmap-test-8b2c72f8-4039-4225-b43a-a5db781e7e0d
STEP: Updating configMap configmap-8853/configmap-test-8b2c72f8-4039-4225-b43a-a5db781e7e0d
STEP: Verifying update of ConfigMap configmap-8853/configmap-test-8b2c72f8-4039-4225-b43a-a5db781e7e0d
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:21.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8853" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":6,"skipped":920,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:22.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] the kubelet should create and update a lease in the kube-node-lease namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50
STEP: check that lease for this Kubelet exists in the kube-node-lease namespace
STEP: check that node lease is updated at least once within the lease duration
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:22.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-7403" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":7,"skipped":1170,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:15.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Nov 22 02:36:15.072: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-13d3cb2c-6dbb-4b5f-b009-45c6cc954fe8" in namespace "security-context-test-65" to be "Succeeded or Failed"
Nov 22 02:36:15.074: INFO: Pod "busybox-privileged-true-13d3cb2c-6dbb-4b5f-b009-45c6cc954fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.837944ms
Nov 22 02:36:17.077: INFO: Pod "busybox-privileged-true-13d3cb2c-6dbb-4b5f-b009-45c6cc954fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005192803s
Nov 22 02:36:19.080: INFO: Pod "busybox-privileged-true-13d3cb2c-6dbb-4b5f-b009-45c6cc954fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008314265s
Nov 22 02:36:21.084: INFO: Pod "busybox-privileged-true-13d3cb2c-6dbb-4b5f-b009-45c6cc954fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01268604s
Nov 22 02:36:23.088: INFO: Pod "busybox-privileged-true-13d3cb2c-6dbb-4b5f-b009-45c6cc954fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016625085s
Nov 22 02:36:25.091: INFO: Pod "busybox-privileged-true-13d3cb2c-6dbb-4b5f-b009-45c6cc954fe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019608807s
Nov 22 02:36:25.091: INFO: Pod "busybox-privileged-true-13d3cb2c-6dbb-4b5f-b009-45c6cc954fe8" satisfied condition "Succeeded or Failed"
Nov 22 02:36:25.102: INFO: Got logs for pod "busybox-privileged-true-13d3cb2c-6dbb-4b5f-b009-45c6cc954fe8": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:25.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-65" for this suite.
• [SLOW TEST:10.073 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":7,"skipped":708,"failed":0}
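The `busybox-privileged-true-*` pod polled above can be approximated by a manifest like the following; aside from the name prefix and the `privileged: true` flag implied by the test title, every field is an assumption for illustration:

```yaml
# Hypothetical sketch; only the name prefix and the privileged flag come from the log.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-true
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: true
```

With `privileged: true` the container is allowed to manipulate host-level resources such as network devices, so the command exits successfully and the pod reaches the "Succeeded" phase seen in the polling above.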
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:18.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Nov 22 02:36:18.535: INFO: Waiting up to 5m0s for pod "security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce" in namespace "security-context-9157" to be "Succeeded or Failed"
Nov 22 02:36:18.537: INFO: Pod "security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14092ms
Nov 22 02:36:20.541: INFO: Pod "security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005289448s
Nov 22 02:36:22.545: INFO: Pod "security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009918691s
Nov 22 02:36:24.549: INFO: Pod "security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013551549s
Nov 22 02:36:26.552: INFO: Pod "security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016860171s
STEP: Saw pod success
Nov 22 02:36:26.552: INFO: Pod "security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce" satisfied condition "Succeeded or Failed"
Nov 22 02:36:26.555: INFO: Trying to get logs from node node2 pod security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce container test-container:
STEP: delete the pod
Nov 22 02:36:26.623: INFO: Waiting for pod security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce to disappear
Nov 22 02:36:26.625: INFO: Pod security-context-62af1952-565f-4aa5-b9f7-20552a8f2bce no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:26.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9157" for this suite.
• [SLOW TEST:8.130 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":7,"skipped":866,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:22.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Nov 22 02:36:22.233: INFO: Waiting up to 5m0s for pod "security-context-2686a10b-6c07-4507-a041-8db3eb1950ef" in namespace "security-context-7017" to be "Succeeded or Failed"
Nov 22 02:36:22.237: INFO: Pod "security-context-2686a10b-6c07-4507-a041-8db3eb1950ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.74783ms
Nov 22 02:36:24.241: INFO: Pod "security-context-2686a10b-6c07-4507-a041-8db3eb1950ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007618844s
Nov 22 02:36:26.245: INFO: Pod "security-context-2686a10b-6c07-4507-a041-8db3eb1950ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011641867s
Nov 22 02:36:28.249: INFO: Pod "security-context-2686a10b-6c07-4507-a041-8db3eb1950ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016493356s
Nov 22 02:36:30.252: INFO: Pod "security-context-2686a10b-6c07-4507-a041-8db3eb1950ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019107432s
STEP: Saw pod success
Nov 22 02:36:30.252: INFO: Pod "security-context-2686a10b-6c07-4507-a041-8db3eb1950ef" satisfied condition "Succeeded or Failed"
Nov 22 02:36:30.254: INFO: Trying to get logs from node node1 pod security-context-2686a10b-6c07-4507-a041-8db3eb1950ef container test-container:
STEP: delete the pod
Nov 22 02:36:30.371: INFO: Waiting for pod security-context-2686a10b-6c07-4507-a041-8db3eb1950ef to disappear
Nov 22 02:36:30.373: INFO: Pod security-context-2686a10b-6c07-4507-a041-8db3eb1950ef no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:30.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7017" for this suite.
• [SLOW TEST:8.178 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":8,"skipped":1232,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:35:53.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274
[BeforeEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295
[It] kubelet should be able to delete 10 pods per node in 1m0s.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-71098144-899c-4396-aa37-eeefc7fbc93d in namespace kubelet-8853
I1122 02:35:53.330198      26 runners.go:190] Created replication controller with name: cleanup20-71098144-899c-4396-aa37-eeefc7fbc93d, namespace: kubelet-8853, replica count: 20
I1122 02:36:03.381891      26 runners.go:190] cleanup20-71098144-899c-4396-aa37-eeefc7fbc93d Pods: 20 out of 20 created, 1 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1122 02:36:13.382680      26 runners.go:190] cleanup20-71098144-899c-4396-aa37-eeefc7fbc93d Pods: 20 out of 20 created, 17 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1122 02:36:23.383482      26 runners.go:190] cleanup20-71098144-899c-4396-aa37-eeefc7fbc93d Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 22 02:36:24.384: INFO: Checking pods on node node2 via /runningpods endpoint
Nov 22 02:36:24.384: INFO: Checking pods on node node1 via /runningpods endpoint
Nov 22 02:36:24.417: INFO: Resource usage on node "node1":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"          1.381       6374.84                 2322.08
"runtime"    0.362       2550.88                 541.88
"kubelet"    0.362       2550.88                 541.88

Resource usage on node "node2":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"kubelet"    0.664       1568.00                 557.23
"/"          1.112       4102.01                 1205.76
"runtime"    0.664       1568.00                 557.23

Resource usage on node "master1":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"          0.440       5034.77                 1731.83
"runtime"    0.129       671.45                  271.27
"kubelet"    0.129       671.45                  271.27

Resource usage on node "master2":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"          0.394       3690.96                 1590.85
"runtime"    0.099       598.81                  254.56
"kubelet"    0.099       598.81                  254.56

Resource usage on node "master3":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"kubelet"    0.100       531.14                  248.14
"/"          0.350       3632.04                 1577.90
"runtime"    0.100       531.14                  248.14

STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-71098144-899c-4396-aa37-eeefc7fbc93d in namespace kubelet-8853, will wait for the garbage collector to delete the pods
Nov 22 02:36:24.474: INFO: Deleting ReplicationController cleanup20-71098144-899c-4396-aa37-eeefc7fbc93d took: 3.798182ms
Nov 22 02:36:25.074: INFO: Terminating ReplicationController cleanup20-71098144-899c-4396-aa37-eeefc7fbc93d pods took: 600.325222ms
Nov 22 02:36:43.176: INFO: Checking pods on node node2 via /runningpods endpoint
Nov 22 02:36:43.176: INFO: Checking pods on node node1 via /runningpods endpoint
Nov 22 02:36:43.192: INFO: Deleting 20 pods on 2 nodes completed in 1.017469227s after the RC was deleted
Nov 22 02:36:43.192: INFO: CPU usage of containers on node "master1":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.000  0.493  0.493  0.648  0.648  0.648
"runtime"    0.000  0.000  0.107  0.116  0.116  0.116  0.116
"kubelet"    0.000  0.000  0.107  0.116  0.116  0.116  0.116

CPU usage of containers on node "master2":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.000  0.394  0.394  0.451  0.451  0.451
"runtime"    0.000  0.000  0.094  0.094  0.099  0.099  0.099
"kubelet"    0.000  0.000  0.094  0.094  0.099  0.099  0.099

CPU usage of containers on node "master3":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.000  0.370  0.370  0.495  0.495  0.495
"runtime"    0.000  0.000  0.100  0.103  0.103  0.103  0.103
"kubelet"    0.000  0.000  0.100  0.103  0.103  0.103  0.103

CPU usage of containers on node "node1":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.000  1.362  1.362  1.381  1.381  1.381
"runtime"    0.000  0.000  0.362  0.362  0.632  0.632  0.632
"kubelet"    0.000  0.000  0.362  0.362  0.632  0.632  0.632

CPU usage of containers on node "node2":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.000  1.411  1.411  1.776  1.776  1.776
"runtime"    0.000  0.000  0.664  0.989  0.989  0.989  0.989
"kubelet"    0.000  0.000  0.664  0.989  0.989  0.989  0.989

[AfterEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node node1
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node node2
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:43.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-8853" for this suite.
• [SLOW TEST:49.954 seconds]
[sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":4,"skipped":241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:30.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Nov 22 02:36:30.442: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f" in namespace "security-context-test-5612" to be "Succeeded or Failed"
Nov 22 02:36:30.444: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.867635ms
Nov 22 02:36:32.448: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005235944s
Nov 22 02:36:34.455: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012279568s
Nov 22 02:36:36.458: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015851877s
Nov 22 02:36:38.463: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020137704s
Nov 22 02:36:40.467: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02456074s
Nov 22 02:36:42.474: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.031087753s
Nov 22 02:36:44.478: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.035039818s
Nov 22 02:36:46.484: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.041633711s
Nov 22 02:36:46.484: INFO: Pod "alpine-nnp-nil-288f51df-d845-444f-9243-b8ab3ea7795f" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:46.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5612" for this suite.
• [SLOW TEST:16.084 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":9,"skipped":1246,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Nov 22 02:36:46.663: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:43.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull image from invalid registry [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 02:36:49.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1500" for this suite.
• [SLOW TEST:6.081 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":5,"skipped":267,"failed":0}
Nov 22 02:36:49.367: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 02:36:25.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a Pod requesting a RuntimeClass with scheduling without taints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a label on the found node.
STEP: verifying the node has the label foo-e8d44c8d-5493-4cbd-8056-5c43d8e9ac22 bar STEP: verifying the node has the label fizz-21ded817-50c1-4e27-8822-3a2401b2beb0 buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-21ded817-50c1-4e27-8822-3a2401b2beb0 off the node node2 STEP: verifying the node doesn't have the label fizz-21ded817-50c1-4e27-8822-3a2401b2beb0 STEP: removing the label foo-e8d44c8d-5493-4cbd-8056-5c43d8e9ac22 off the node node2 STEP: verifying the node doesn't have the label foo-e8d44c8d-5493-4cbd-8056-5c43d8e9ac22 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:36:49.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-9110" for this suite. • [SLOW TEST:24.123 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":8,"skipped":1062,"failed":0} Nov 22 02:36:49.884: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:36:15.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-57796c95-4fc2-46ad-9a25-2a54e3b1c4ab in namespace container-probe-5039 Nov 22 02:36:20.004: INFO: Started pod startup-57796c95-4fc2-46ad-9a25-2a54e3b1c4ab in namespace container-probe-5039 STEP: checking the pod's current state and verifying that restartCount is present Nov 22 02:36:20.006: INFO: Initial restart count of pod startup-57796c95-4fc2-46ad-9a25-2a54e3b1c4ab is 0 Nov 22 02:37:28.160: INFO: Restart count of pod container-probe-5039/startup-57796c95-4fc2-46ad-9a25-2a54e3b1c4ab is now 1 (1m8.154535963s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:37:28.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5039" for this suite. 
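The test above created a pod whose startup probe fails and verified that its restartCount went from 0 to 1. As a rough illustration of the mechanism being exercised — not kubelet code — the rule is that a configurable number of consecutive startup-probe failures triggers a container restart. All names below are hypothetical; this is a minimal sketch of that counting logic only.

```python
# Hypothetical sketch of startup-probe failure handling, as exercised by the
# container-probe test above. Not kubelet code; names are illustrative.

def run_startup_probe_loop(probe, failure_threshold, max_checks):
    """Return the restart count after max_checks probe attempts.

    `probe` is a callable returning True on success. The container is
    restarted once `failure_threshold` consecutive failures accumulate;
    a restart resets the consecutive-failure counter.
    """
    restart_count = 0
    consecutive_failures = 0
    for _ in range(max_checks):
        if probe():
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                restart_count += 1
                consecutive_failures = 0
    return restart_count
```

With a probe that always fails and a threshold of 3, seven checks produce two restarts — the same "restartCount is now 1" progression the test polls for, just run further.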
• [SLOW TEST:72.207 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:40.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Nov 22 02:34:48.001: INFO: watch delete seen for pod-submit-status-1-0 Nov 22 02:34:48.001: INFO: Pod pod-submit-status-1-0 on node node2 timings total=7.211899346s t=86ms run=0s execute=0s Nov 22 02:34:48.720: INFO: watch delete seen for pod-submit-status-2-0 Nov 22 02:34:48.720: INFO: Pod pod-submit-status-2-0 on node node1 timings total=7.93041591s t=1.955s run=0s execute=0s Nov 22 02:34:55.467: INFO: watch delete seen for pod-submit-status-2-1 Nov 22 02:34:55.468: INFO: Pod pod-submit-status-2-1 on node node1 timings total=6.747603787s t=461ms run=0s execute=0s Nov 22 02:35:03.403: INFO: watch delete seen for pod-submit-status-1-1 Nov 22 02:35:03.403: INFO: Pod pod-submit-status-1-1 on node node2 timings total=15.401516373s t=1.759s run=0s execute=0s Nov 22 02:35:06.845: INFO: watch delete seen for pod-submit-status-0-0 Nov 22 02:35:06.845: INFO: Pod 
pod-submit-status-0-0 on node node2 timings total=26.055666557s t=1.949s run=0s execute=0s Nov 22 02:35:14.020: INFO: watch delete seen for pod-submit-status-2-2 Nov 22 02:35:14.020: INFO: Pod pod-submit-status-2-2 on node node1 timings total=18.552224577s t=543ms run=0s execute=0s Nov 22 02:35:14.065: INFO: watch delete seen for pod-submit-status-1-2 Nov 22 02:35:14.065: INFO: Pod pod-submit-status-1-2 on node node1 timings total=10.662068923s t=1.939s run=3s execute=0s Nov 22 02:35:14.375: INFO: watch delete seen for pod-submit-status-1-3 Nov 22 02:35:14.375: INFO: Pod pod-submit-status-1-3 on node node1 timings total=310.195677ms t=120ms run=0s execute=0s Nov 22 02:35:23.400: INFO: watch delete seen for pod-submit-status-2-3 Nov 22 02:35:23.400: INFO: Pod pod-submit-status-2-3 on node node2 timings total=9.380509669s t=1.221s run=0s execute=0s Nov 22 02:35:23.418: INFO: watch delete seen for pod-submit-status-1-4 Nov 22 02:35:23.418: INFO: Pod pod-submit-status-1-4 on node node1 timings total=9.042167267s t=1.046s run=0s execute=0s Nov 22 02:35:23.427: INFO: watch delete seen for pod-submit-status-0-1 Nov 22 02:35:23.427: INFO: Pod pod-submit-status-0-1 on node node1 timings total=16.581448326s t=1.307s run=0s execute=0s Nov 22 02:35:26.405: INFO: watch delete seen for pod-submit-status-0-2 Nov 22 02:35:26.406: INFO: Pod pod-submit-status-0-2 on node node1 timings total=2.978851187s t=1.177s run=0s execute=0s Nov 22 02:35:30.806: INFO: watch delete seen for pod-submit-status-2-4 Nov 22 02:35:30.806: INFO: Pod pod-submit-status-2-4 on node node1 timings total=7.405810927s t=1.896s run=0s execute=0s Nov 22 02:35:31.404: INFO: watch delete seen for pod-submit-status-1-5 Nov 22 02:35:31.405: INFO: Pod pod-submit-status-1-5 on node node1 timings total=7.986878096s t=1.086s run=0s execute=0s Nov 22 02:35:31.803: INFO: watch delete seen for pod-submit-status-0-3 Nov 22 02:35:31.803: INFO: Pod pod-submit-status-0-3 on node node1 timings total=5.397816123s t=493ms run=0s 
execute=0s Nov 22 02:35:39.751: INFO: watch delete seen for pod-submit-status-0-4 Nov 22 02:35:39.751: INFO: Pod pod-submit-status-0-4 on node node2 timings total=7.947754062s t=1.573s run=0s execute=0s Nov 22 02:35:43.402: INFO: watch delete seen for pod-submit-status-2-5 Nov 22 02:35:43.402: INFO: Pod pod-submit-status-2-5 on node node2 timings total=12.596144283s t=131ms run=0s execute=0s Nov 22 02:35:43.407: INFO: watch delete seen for pod-submit-status-1-6 Nov 22 02:35:43.407: INFO: Pod pod-submit-status-1-6 on node node1 timings total=12.002568307s t=1.135s run=0s execute=0s Nov 22 02:35:57.100: INFO: watch delete seen for pod-submit-status-2-6 Nov 22 02:35:57.100: INFO: Pod pod-submit-status-2-6 on node node2 timings total=13.697798826s t=536ms run=0s execute=0s Nov 22 02:36:00.900: INFO: watch delete seen for pod-submit-status-1-7 Nov 22 02:36:00.900: INFO: Pod pod-submit-status-1-7 on node node2 timings total=17.492575314s t=766ms run=0s execute=0s Nov 22 02:36:02.699: INFO: watch delete seen for pod-submit-status-2-7 Nov 22 02:36:02.699: INFO: Pod pod-submit-status-2-7 on node node2 timings total=5.598612925s t=468ms run=0s execute=0s Nov 22 02:36:08.298: INFO: watch delete seen for pod-submit-status-1-8 Nov 22 02:36:08.298: INFO: Pod pod-submit-status-1-8 on node node2 timings total=7.397807374s t=349ms run=0s execute=0s Nov 22 02:36:09.300: INFO: watch delete seen for pod-submit-status-2-8 Nov 22 02:36:09.300: INFO: Pod pod-submit-status-2-8 on node node2 timings total=6.601323265s t=1.174s run=0s execute=0s Nov 22 02:36:11.807: INFO: watch delete seen for pod-submit-status-2-9 Nov 22 02:36:11.807: INFO: Pod pod-submit-status-2-9 on node node1 timings total=2.507019422s t=1.191s run=0s execute=0s Nov 22 02:36:15.108: INFO: watch delete seen for pod-submit-status-1-9 Nov 22 02:36:15.108: INFO: Pod pod-submit-status-1-9 on node node2 timings total=6.810553654s t=51ms run=0s execute=0s Nov 22 02:36:16.300: INFO: watch delete seen for pod-submit-status-0-5 
Nov 22 02:36:16.300: INFO: Pod pod-submit-status-0-5 on node node2 timings total=36.549184633s t=1.492s run=0s execute=0s Nov 22 02:36:23.919: INFO: watch delete seen for pod-submit-status-2-10 Nov 22 02:36:23.919: INFO: Pod pod-submit-status-2-10 on node node1 timings total=12.111128776s t=546ms run=0s execute=0s Nov 22 02:36:23.971: INFO: watch delete seen for pod-submit-status-1-10 Nov 22 02:36:23.971: INFO: Pod pod-submit-status-1-10 on node node1 timings total=8.862491257s t=1.691s run=0s execute=0s Nov 22 02:36:24.099: INFO: watch delete seen for pod-submit-status-0-6 Nov 22 02:36:24.100: INFO: Pod pod-submit-status-0-6 on node node2 timings total=7.79905003s t=1.563s run=3s execute=1s Nov 22 02:36:30.524: INFO: watch delete seen for pod-submit-status-2-11 Nov 22 02:36:30.524: INFO: Pod pod-submit-status-2-11 on node node2 timings total=6.605790996s t=1.254s run=0s execute=0s Nov 22 02:36:31.497: INFO: watch delete seen for pod-submit-status-0-7 Nov 22 02:36:31.498: INFO: Pod pod-submit-status-0-7 on node node2 timings total=7.397899393s t=1.575s run=0s execute=0s Nov 22 02:36:34.498: INFO: watch delete seen for pod-submit-status-1-11 Nov 22 02:36:34.498: INFO: Pod pod-submit-status-1-11 on node node2 timings total=10.527339842s t=294ms run=0s execute=0s Nov 22 02:36:36.101: INFO: watch delete seen for pod-submit-status-2-12 Nov 22 02:36:36.101: INFO: Pod pod-submit-status-2-12 on node node2 timings total=5.576716058s t=1.671s run=0s execute=0s Nov 22 02:36:37.879: INFO: watch delete seen for pod-submit-status-0-8 Nov 22 02:36:37.879: INFO: Pod pod-submit-status-0-8 on node node1 timings total=6.381173481s t=1.892s run=3s execute=0s Nov 22 02:36:41.281: INFO: watch delete seen for pod-submit-status-0-9 Nov 22 02:36:41.281: INFO: Pod pod-submit-status-0-9 on node node1 timings total=3.402021016s t=808ms run=0s execute=0s Nov 22 02:36:42.081: INFO: watch delete seen for pod-submit-status-2-13 Nov 22 02:36:42.082: INFO: Pod pod-submit-status-2-13 on node node1 
timings total=5.980217782s t=347ms run=0s execute=0s Nov 22 02:36:43.498: INFO: watch delete seen for pod-submit-status-1-12 Nov 22 02:36:43.498: INFO: Pod pod-submit-status-1-12 on node node2 timings total=9.00016163s t=1.82s run=0s execute=0s Nov 22 02:36:47.679: INFO: watch delete seen for pod-submit-status-1-13 Nov 22 02:36:47.679: INFO: Pod pod-submit-status-1-13 on node node1 timings total=4.180235173s t=1.045s run=0s execute=0s Nov 22 02:36:53.407: INFO: watch delete seen for pod-submit-status-2-14 Nov 22 02:36:53.407: INFO: Pod pod-submit-status-2-14 on node node1 timings total=11.325867131s t=1.143s run=0s execute=0s Nov 22 02:36:53.416: INFO: watch delete seen for pod-submit-status-0-10 Nov 22 02:36:53.416: INFO: Pod pod-submit-status-0-10 on node node1 timings total=12.135114522s t=250ms run=0s execute=0s Nov 22 02:37:03.411: INFO: watch delete seen for pod-submit-status-1-14 Nov 22 02:37:03.411: INFO: Pod pod-submit-status-1-14 on node node1 timings total=15.732603852s t=328ms run=0s execute=0s Nov 22 02:37:03.421: INFO: watch delete seen for pod-submit-status-0-11 Nov 22 02:37:03.421: INFO: Pod pod-submit-status-0-11 on node node2 timings total=10.004864139s t=1.689s run=2s execute=0s Nov 22 02:37:13.413: INFO: watch delete seen for pod-submit-status-0-12 Nov 22 02:37:13.413: INFO: Pod pod-submit-status-0-12 on node node1 timings total=9.991867861s t=135ms run=0s execute=0s Nov 22 02:37:23.407: INFO: watch delete seen for pod-submit-status-0-13 Nov 22 02:37:23.407: INFO: Pod pod-submit-status-0-13 on node node2 timings total=9.994345755s t=151ms run=0s execute=0s Nov 22 02:37:33.403: INFO: watch delete seen for pod-submit-status-0-14 Nov 22 02:37:33.403: INFO: Pod pod-submit-status-0-14 on node node2 timings total=9.995948878s t=105ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:37:33.404: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "pods-7223" for this suite. • [SLOW TEST:172.643 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":3,"skipped":189,"failed":0} Nov 22 02:37:33.414: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:34:40.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-d0b71197-eae8-416c-8322-9e6b7ef9666f in namespace container-probe-2209 Nov 22 02:34:46.845: INFO: Started pod startup-d0b71197-eae8-416c-8322-9e6b7ef9666f in namespace container-probe-2209 STEP: checking the pod's current state and verifying that restartCount is present Nov 22 02:34:46.847: INFO: Initial restart count of pod startup-d0b71197-eae8-416c-8322-9e6b7ef9666f is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing 
container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:38:47.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2209" for this suite. • [SLOW TEST:246.558 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":2,"skipped":208,"failed":0} Nov 22 02:38:47.370: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:35:01.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-0b32a62f-a1cf-4820-bbea-bbc6ba914360 in namespace container-probe-9326 Nov 22 02:35:07.859: INFO: Started pod liveness-0b32a62f-a1cf-4820-bbea-bbc6ba914360 in namespace container-probe-9326 STEP: checking the pod's current state and verifying that restartCount is present Nov 22 02:35:07.862: INFO: Initial 
restart count of pod liveness-0b32a62f-a1cf-4820-bbea-bbc6ba914360 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:39:08.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9326" for this suite. • [SLOW TEST:246.566 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":7,"skipped":562,"failed":0} Nov 22 02:39:08.385: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:35:23.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 Nov 22 02:35:23.738: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:35:25.741: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 22 
02:35:27.743: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Nov 22 02:36:45.818: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-11-22 02:35:59 +0000 UTC restartedAt=2021-11-22 02:36:33 +0000 UTC (34s) STEP: getting restart delay-1 Nov 22 02:37:22.967: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-11-22 02:36:38 +0000 UTC restartedAt=2021-11-22 02:37:22 +0000 UTC (44s) STEP: getting restart delay-2 Nov 22 02:38:52.302: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-11-22 02:37:27 +0000 UTC restartedAt=2021-11-22 02:38:50 +0000 UTC (1m23s) STEP: updating the image Nov 22 02:38:52.812: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Nov 22 02:39:18.876: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-11-22 02:39:01 +0000 UTC restartedAt=2021-11-22 02:39:17 +0000 UTC (16s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:39:18.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7135" for this suite. 
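The back-off test above observed restart delays growing (34s, 44s, 1m23s) and then dropping back to 16s after the image update. That matches the kubelet's crash-loop back-off behavior: the delay roughly doubles per restart from a 10s base, is capped at 5m, and the counter is reset when the container image changes. A minimal model of that schedule, with hypothetical function names (the constants mirror kubelet defaults, but this is an illustration, not the kubelet's implementation):

```python
# Illustrative model of kubelet crash-loop restart back-off: delay doubles
# per restart from a 10s base, capped at 300s, and resets on image update.
# Function names are hypothetical; constants mirror kubelet defaults.

BASE_DELAY_S = 10
MAX_DELAY_S = 300

def restart_delay(restarts_since_reset):
    """Back-off delay (seconds) before restart number
    `restarts_since_reset` (0-based) since the last reset event."""
    return min(BASE_DELAY_S * (2 ** restarts_since_reset), MAX_DELAY_S)

def delays_with_image_update(crashes_before, crashes_after):
    """Delay sequence for a pod that keeps crashing, with the back-off
    counter reset by an image update partway through (as in the test)."""
    return ([restart_delay(i) for i in range(crashes_before)]
            + [restart_delay(i) for i in range(crashes_after)])
```

Under this model `delays_with_image_update(3, 1)` yields `[10, 20, 40, 10]`: the post-update delay falls back to the base, which is the reset the test asserts when it sees 16s after three longer delays.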
• [SLOW TEST:235.180 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":4,"skipped":628,"failed":0} Nov 22 02:39:18.886: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:35:26.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Nov 22 02:35:26.161: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Nov 22 02:35:27.174: INFO: node status heartbeat is unchanged for 1.005056569s, waiting for 1m20s Nov 22 02:35:28.174: INFO: node status heartbeat is unchanged for 2.00493889s, waiting for 1m20s Nov 22 02:35:29.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:35:29.177: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, 
s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:28 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:28 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:28 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: 
"node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Nov 22 02:35:30.173: INFO: node status heartbeat is unchanged for 1.000877031s, waiting for 1m20s Nov 22 02:35:31.175: INFO: node status heartbeat is unchanged for 2.002625454s, waiting for 1m20s Nov 22 02:35:32.175: INFO: node status heartbeat is unchanged for 3.00256526s, waiting for 1m20s Nov 22 02:35:33.173: INFO: node status heartbeat is unchanged for 4.000714821s, waiting for 1m20s Nov 22 02:35:34.173: INFO: node status heartbeat is unchanged for 5.000951153s, waiting for 1m20s Nov 22 02:35:35.173: INFO: node status heartbeat is unchanged for 6.001023871s, waiting for 1m20s Nov 22 02:35:36.173: INFO: node status heartbeat is unchanged for 7.000319529s, waiting for 1m20s Nov 22 02:35:37.173: INFO: node status heartbeat is unchanged for 8.000159831s, waiting for 1m20s Nov 22 02:35:38.174: INFO: node status heartbeat is unchanged for 9.00179616s, waiting for 1m20s Nov 22 02:35:39.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:35:39.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:38 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:38 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:38 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields
  }
Nov 22 02:35:40.173: INFO: node status heartbeat is unchanged for 999.503791ms, waiting for 1m20s
Nov 22 02:35:41.175: INFO: node status heartbeat is unchanged for 2.002035062s, waiting for 1m20s
Nov 22 02:35:42.173: INFO: node status heartbeat is unchanged for 2.999531299s, waiting for 1m20s
Nov 22 02:35:43.176: INFO: node status heartbeat is unchanged for 4.002850071s, waiting for 1m20s
Nov 22 02:35:44.175: INFO: node status heartbeat is unchanged for 5.001736826s, waiting for 1m20s
Nov 22 02:35:45.172: INFO: node status heartbeat is unchanged for 5.999308226s, waiting for 1m20s
Nov 22 02:35:46.175: INFO: node status heartbeat is unchanged for 7.002232432s, waiting for 1m20s
Nov 22 02:35:47.175: INFO: node status heartbeat is unchanged for 8.002350909s, waiting for 1m20s
Nov 22 02:35:48.175: INFO: node status heartbeat is unchanged for 9.002368656s, waiting for 1m20s
Nov 22 02:35:49.175: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:35:49.179: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:38 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:48 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:38 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:48 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:38 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:48 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:35:50.173: INFO: node status heartbeat is unchanged for 998.700414ms, waiting for 1m20s
Nov 22 02:35:51.175: INFO: node status heartbeat is unchanged for 1.999930339s, waiting for 1m20s
Nov 22 02:35:52.174: INFO: node status heartbeat is unchanged for 2.999763418s, waiting for 1m20s
Nov 22 02:35:53.176: INFO: node status heartbeat is unchanged for 4.001292534s, waiting for 1m20s
Nov 22 02:35:54.174: INFO: node status heartbeat is unchanged for 4.999562687s, waiting for 1m20s
Nov 22 02:35:55.174: INFO: node status heartbeat is unchanged for 5.998876563s, waiting for 1m20s
Nov 22 02:35:56.174: INFO: node status heartbeat is unchanged for 6.999325579s, waiting for 1m20s
Nov 22 02:35:57.173: INFO: node status heartbeat is unchanged for 7.998643612s, waiting for 1m20s
Nov 22 02:35:58.173: INFO: node status heartbeat is unchanged for 8.998476936s, waiting for 1m20s
Nov 22 02:35:59.173: INFO: node status heartbeat is unchanged for 9.998041371s, waiting for 1m20s
Nov 22 02:36:00.173: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Nov 22 02:36:00.178: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:48 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:59 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:48 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:59 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:48 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:59 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:36:01.173: INFO: node status heartbeat is unchanged for 1.000043079s, waiting for 1m20s
Nov 22 02:36:02.174: INFO: node status heartbeat is unchanged for 2.000463461s, waiting for 1m20s
Nov 22 02:36:03.172: INFO: node status heartbeat is unchanged for 2.999287655s, waiting for 1m20s
Nov 22 02:36:04.173: INFO: node status heartbeat is unchanged for 3.999508152s, waiting for 1m20s
Nov 22 02:36:05.173: INFO: node status heartbeat is unchanged for 4.999561443s, waiting for 1m20s
Nov 22 02:36:06.175: INFO: node status heartbeat is unchanged for 6.001705318s, waiting for 1m20s
Nov 22 02:36:07.174: INFO: node status heartbeat is unchanged for 7.000977048s, waiting for 1m20s
Nov 22 02:36:08.176: INFO: node status heartbeat is unchanged for 8.002693445s, waiting for 1m20s
Nov 22 02:36:09.173: INFO: node status heartbeat is unchanged for 8.999660721s, waiting for 1m20s
Nov 22 02:36:10.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:36:10.177: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:59 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:09 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:59 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:09 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:35:59 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:09 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   NodeInfo: {MachineID: "f7ac7a5a4fa14ecb963cd8859464e44b", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "4c94afbc-9699-4a1a-a85f-52972142831b", KernelVersion: "3.10.0-1160.45.1.el7.x86_64", ...},
   Images: []v1.ContainerImage{
   ... // 23 identical elements
   {Names: {"k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d"..., "k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2"}, SizeBytes: 44576952},
   {Names: {"localhost:30500/sriov-device-plugin@sha256:fa923f38831d8c20c7c80"..., "localhost:30500/sriov-device-plugin:v3.3.2"}, SizeBytes: 42686989},
+  {
+  Names: []string{
+  "k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34d"...,
+  "k8s.gcr.io/e2e-test-images/nonroot:1.1",
+  },
+  SizeBytes: 42321438,
+  },
   {Names: {"localhost:30500/tasextender@sha256:d8832dc123d295a3bf913b43c6f72"..., "localhost:30500/tasextender:0.4"}, SizeBytes: 28910791},
   {Names: {"quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72"..., "quay.io/prometheus/node-exporter:v1.0.1"}, SizeBytes: 26430341},
   ... // 10 identical elements
   },
   VolumesInUse: nil,
   VolumesAttached: nil,
   Config: nil,
  }
Nov 22 02:36:11.175: INFO: node status heartbeat is unchanged for 1.002293249s, waiting for 1m20s
Nov 22 02:36:12.174: INFO: node status heartbeat is unchanged for 2.001980404s, waiting for 1m20s
Nov 22 02:36:13.173: INFO: node status heartbeat is unchanged for 3.000785092s, waiting for 1m20s
Nov 22 02:36:14.173: INFO: node status heartbeat is unchanged for 4.000637728s, waiting for 1m20s
Nov 22 02:36:15.172: INFO: node status heartbeat is unchanged for 4.99951592s, waiting for 1m20s
Nov 22 02:36:16.173: INFO: node status heartbeat is unchanged for 6.000207239s, waiting for 1m20s
Nov 22 02:36:17.173: INFO: node status heartbeat is unchanged for 7.000122713s, waiting for 1m20s
Nov 22 02:36:18.175: INFO: node status heartbeat is unchanged for 8.002734731s, waiting for 1m20s
Nov 22 02:36:19.173: INFO: node status heartbeat is unchanged for 9.000220924s, waiting for 1m20s
Nov 22 02:36:20.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:36:20.178: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:09 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:19 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:09 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:19 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:09 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:19 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:36:21.173: INFO: node status heartbeat is unchanged for 999.619476ms, waiting for 1m20s
Nov 22 02:36:22.173: INFO: node status heartbeat is unchanged for 1.999618848s, waiting for 1m20s
Nov 22 02:36:23.173: INFO: node status heartbeat is unchanged for 2.999480304s, waiting for 1m20s
Nov 22 02:36:24.173: INFO: node status heartbeat is unchanged for 3.999263867s, waiting for 1m20s
Nov 22 02:36:25.173: INFO: node status heartbeat is unchanged for 4.99976553s, waiting for 1m20s
Nov 22 02:36:26.174: INFO: node status heartbeat is unchanged for 6.000229198s, waiting for 1m20s
Nov 22 02:36:27.173: INFO: node status heartbeat is unchanged for 6.999580633s, waiting for 1m20s
Nov 22 02:36:28.173: INFO: node status heartbeat is unchanged for 7.999920029s, waiting for 1m20s
Nov 22 02:36:29.173: INFO: node status heartbeat is unchanged for 8.999873402s, waiting for 1m20s
Nov 22 02:36:30.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:36:30.178: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:19 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:29 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:19 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:29 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:19 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:29 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:36:31.172: INFO: node status heartbeat is unchanged for 999.06917ms, waiting for 1m20s
Nov 22 02:36:32.173: INFO: node status heartbeat is unchanged for 2.000277928s, waiting for 1m20s
Nov 22 02:36:33.174: INFO: node status heartbeat is unchanged for 3.001307442s, waiting for 1m20s
Nov 22 02:36:34.174: INFO: node status heartbeat is unchanged for 4.000630042s, waiting for 1m20s
Nov 22 02:36:35.174: INFO: node status heartbeat is unchanged for 5.000454927s, waiting for 1m20s
Nov 22 02:36:36.173: INFO: node status heartbeat is unchanged for 5.999617739s, waiting for 1m20s
Nov 22 02:36:37.176: INFO: node status heartbeat is unchanged for 7.002586746s, waiting for 1m20s
Nov 22 02:36:38.173: INFO: node status heartbeat is unchanged for 8.000115847s, waiting for 1m20s
Nov 22 02:36:39.173: INFO: node status heartbeat is unchanged for 9.000161309s, waiting for 1m20s
Nov 22 02:36:40.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:36:40.177: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:29 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:39 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:29 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:39 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:29 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:39 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:36:41.173: INFO: node status heartbeat is unchanged for 999.91179ms, waiting for 1m20s
Nov 22 02:36:42.174: INFO: node status heartbeat is unchanged for 2.001644652s, waiting for 1m20s
Nov 22 02:36:43.174: INFO: node status heartbeat is unchanged for 3.001858608s, waiting for 1m20s
Nov 22 02:36:44.175: INFO: node status heartbeat is unchanged for 4.002247861s, waiting for 1m20s
Nov 22 02:36:45.173: INFO: node status heartbeat is unchanged for 5.000491037s, waiting for 1m20s
Nov 22 02:36:46.174: INFO: node status heartbeat is unchanged for 6.00146087s, waiting for 1m20s
Nov 22 02:36:47.174: INFO: node status heartbeat is unchanged for 7.001497286s, waiting for 1m20s
Nov 22 02:36:48.173: INFO: node status heartbeat is unchanged for 8.000733157s, waiting for 1m20s
Nov 22 02:36:49.173: INFO: node status heartbeat is unchanged for 9.000319233s, waiting for 1m20s
Nov 22 02:36:50.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:36:50.177: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:39 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:49 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:39 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:49 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:39 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:49 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:36:51.175: INFO: node status heartbeat is unchanged for 1.002156989s, waiting for 1m20s
Nov 22 02:36:52.175: INFO: node status heartbeat is unchanged for 2.002130438s, waiting for 1m20s
Nov 22 02:36:53.175: INFO: node status heartbeat is unchanged for 3.002857688s, waiting for 1m20s
Nov 22 02:36:54.173: INFO: node status heartbeat is unchanged for 4.000807695s, waiting for 1m20s
Nov 22 02:36:55.173: INFO: node status heartbeat is unchanged for 5.000650828s, waiting for 1m20s
Nov 22 02:36:56.173: INFO: node status heartbeat is unchanged for 6.001055763s, waiting for 1m20s
Nov 22 02:36:57.175: INFO: node status heartbeat is unchanged for 7.002175401s, waiting for 1m20s
Nov 22 02:36:58.174: INFO: node status heartbeat is unchanged for 8.001970838s, waiting for 1m20s
Nov 22 02:36:59.174: INFO: node status heartbeat is unchanged for 9.001230798s, waiting for 1m20s
Nov 22 02:37:00.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:37:00.178: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:49 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:59 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:49 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:59 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:49 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:59 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:37:01.175: INFO: node status heartbeat is unchanged for 1.001655436s, waiting for 1m20s
Nov 22 02:37:02.174: INFO: node status heartbeat is unchanged for 2.000305818s, waiting for 1m20s
Nov 22 02:37:03.176: INFO: node status heartbeat is unchanged for 3.002375687s, waiting for 1m20s
Nov 22 02:37:04.174: INFO: node status heartbeat is unchanged for 4.000704375s, waiting for 1m20s
Nov 22 02:37:05.174: INFO: node status heartbeat is unchanged for 5.000308764s, waiting for 1m20s
Nov 22 02:37:06.173: INFO: node status heartbeat is unchanged for 6.000105359s, waiting for 1m20s
Nov 22 02:37:07.175: INFO: node status heartbeat is unchanged for 7.001871523s, waiting for 1m20s
Nov 22 02:37:08.176: INFO: node status heartbeat is unchanged for 8.002527622s, waiting for 1m20s
Nov 22 02:37:09.175: INFO: node status heartbeat is unchanged for 9.001220063s, waiting for 1m20s
Nov 22 02:37:10.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:37:10.178: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:59 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:09 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:59 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:09 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:36:59 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:09 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:37:11.176: INFO: node status heartbeat is unchanged for 1.002948282s, waiting for 1m20s
Nov 22 02:37:12.175: INFO: node status heartbeat is unchanged for 2.0016098s, waiting for 1m20s
Nov 22 02:37:13.176: INFO: node status heartbeat is unchanged for 3.002437997s, waiting for 1m20s
Nov 22 02:37:14.176: INFO: node status heartbeat is unchanged for 4.00241204s, waiting for 1m20s
Nov 22 02:37:15.173: INFO: node status heartbeat is unchanged for 4.99969126s, waiting for 1m20s
Nov 22 02:37:16.176: INFO: node status heartbeat is unchanged for 6.002546831s, waiting for 1m20s
Nov 22 02:37:17.175: INFO: node status heartbeat is unchanged for 7.002277172s, waiting for 1m20s
Nov 22 02:37:18.176: INFO: node status heartbeat is unchanged for 8.002586677s, waiting for 1m20s
Nov 22 02:37:19.174: INFO: node status heartbeat is unchanged for 9.000451935s, waiting for 1m20s
Nov 22 02:37:20.174: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:37:20.178: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:09 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:19 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:09 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:19 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:09 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:19 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:37:21.173: INFO: node status heartbeat is unchanged for 999.398043ms, waiting for 1m20s
Nov 22 02:37:22.173: INFO: node status heartbeat is unchanged for 1.999588834s, waiting for 1m20s
Nov 22 02:37:23.175: INFO: node status heartbeat is unchanged for 3.001787292s, waiting for 1m20s
Nov 22 02:37:24.175: INFO: node status heartbeat is unchanged for 4.000893355s, waiting for 1m20s
Nov 22 02:37:25.173: INFO: node status heartbeat is unchanged for 4.99939084s, waiting for 1m20s
Nov 22 02:37:26.175: INFO: node status heartbeat is unchanged for 6.001783651s, waiting for 1m20s
Nov 22 02:37:27.177: INFO: node status heartbeat is unchanged for 7.002895564s, waiting for 1m20s
Nov 22 02:37:28.173: INFO: node status heartbeat is unchanged for 7.999303826s, waiting for 1m20s
Nov 22 02:37:29.173: INFO: node status heartbeat is unchanged for 8.998829876s, waiting for 1m20s
Nov 22 02:37:30.174: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:37:30.178: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:19 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:29 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientMemory",
   Message: "kubelet has sufficient memory available",
   },
   {
   Type: "DiskPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:19 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:29 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasNoDiskPressure",
   Message: "kubelet has no disk pressure",
   },
   {
   Type: "PIDPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:19 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:29 +0000 UTC"},
   LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},
   Reason: "KubeletHasSufficientPID",
   Message: "kubelet has sufficient PID available",
   },
   {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},
   },
   Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
   DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
   ... // 5 identical fields
  }
Nov 22 02:37:31.175: INFO: node status heartbeat is unchanged for 1.001428058s, waiting for 1m20s
Nov 22 02:37:32.173: INFO: node status heartbeat is unchanged for 1.99979569s, waiting for 1m20s
Nov 22 02:37:33.173: INFO: node status heartbeat is unchanged for 2.999391816s, waiting for 1m20s
Nov 22 02:37:34.174: INFO: node status heartbeat is unchanged for 4.000063775s, waiting for 1m20s
Nov 22 02:37:35.172: INFO: node status heartbeat is unchanged for 4.998959145s, waiting for 1m20s
Nov 22 02:37:36.173: INFO: node status heartbeat is unchanged for 5.999829584s, waiting for 1m20s
Nov 22 02:37:37.174: INFO: node status heartbeat is unchanged for 7.000539374s, waiting for 1m20s
Nov 22 02:37:38.173: INFO: node status heartbeat is unchanged for 7.99993922s, waiting for 1m20s
Nov 22 02:37:39.173: INFO: node status heartbeat is unchanged for 8.999831795s, waiting for 1m20s
Nov 22 02:37:40.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 22 02:37:40.178: INFO:   v1.NodeStatus{
   Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
   Phase: "",
   Conditions: []v1.NodeCondition{
   {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},
   {
   Type: "MemoryPressure",
   Status: "False",
-  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:29 +0000 UTC"},
+
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:39 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:29 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:39 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:29 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:39 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:37:41.174: INFO: node status heartbeat is unchanged for 1.000710797s, waiting for 1m20s Nov 22 02:37:42.173: INFO: node status heartbeat is unchanged for 1.99955557s, waiting for 1m20s Nov 22 02:37:43.173: INFO: node status heartbeat is unchanged for 3.000186053s, waiting for 1m20s Nov 22 02:37:44.173: INFO: node status heartbeat is unchanged for 3.999560979s, waiting for 1m20s Nov 22 02:37:45.174: INFO: node status heartbeat is unchanged for 5.00045003s, waiting for 1m20s Nov 22 02:37:46.172: INFO: node status heartbeat is unchanged for 5.999216746s, waiting for 1m20s Nov 22 02:37:47.176: INFO: node status heartbeat is unchanged for 7.002497852s, waiting for 1m20s Nov 22 02:37:48.173: INFO: node status heartbeat is unchanged for 7.999876568s, waiting for 1m20s Nov 22 02:37:49.172: INFO: node status heartbeat is unchanged for 8.999251586s, waiting for 1m20s Nov 22 02:37:50.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:37:50.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:39 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:49 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:39 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:49 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:39 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:49 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:37:51.174: INFO: node status heartbeat is unchanged for 1.000700827s, waiting for 1m20s Nov 22 02:37:52.173: INFO: node status heartbeat is unchanged for 2.000387853s, waiting for 1m20s Nov 22 02:37:53.174: INFO: node status heartbeat is unchanged for 3.000749519s, waiting for 1m20s Nov 22 02:37:54.173: INFO: node status heartbeat is unchanged for 3.999681426s, waiting for 1m20s Nov 22 02:37:55.173: INFO: node status heartbeat is unchanged for 4.999777345s, waiting for 1m20s Nov 22 02:37:56.173: INFO: node status heartbeat is unchanged for 6.000028619s, waiting for 1m20s Nov 22 02:37:57.173: INFO: node status heartbeat is unchanged for 7.000368278s, waiting for 1m20s Nov 22 02:37:58.175: INFO: node status heartbeat is unchanged for 8.002362646s, waiting for 1m20s Nov 22 02:37:59.175: INFO: node status heartbeat is unchanged for 9.001771009s, waiting for 1m20s Nov 22 02:38:00.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:38:00.177: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:49 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:59 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:49 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:59 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:49 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:59 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:38:01.174: INFO: node status heartbeat is unchanged for 1.00104499s, waiting for 1m20s Nov 22 02:38:02.176: INFO: node status heartbeat is unchanged for 2.003272391s, waiting for 1m20s Nov 22 02:38:03.175: INFO: node status heartbeat is unchanged for 3.002400512s, waiting for 1m20s Nov 22 02:38:04.175: INFO: node status heartbeat is unchanged for 4.002390444s, waiting for 1m20s Nov 22 02:38:05.172: INFO: node status heartbeat is unchanged for 4.999105594s, waiting for 1m20s Nov 22 02:38:06.174: INFO: node status heartbeat is unchanged for 6.001268454s, waiting for 1m20s Nov 22 02:38:07.175: INFO: node status heartbeat is unchanged for 7.002143456s, waiting for 1m20s Nov 22 02:38:08.175: INFO: node status heartbeat is unchanged for 8.002181003s, waiting for 1m20s Nov 22 02:38:09.174: INFO: node status heartbeat is unchanged for 9.001801207s, waiting for 1m20s Nov 22 02:38:10.173: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Nov 22 02:38:10.177: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:59 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:37:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:38:11.176: INFO: node status heartbeat is unchanged for 1.002612834s, waiting for 1m20s Nov 22 02:38:12.174: INFO: node status heartbeat is unchanged for 2.000825212s, waiting for 1m20s Nov 22 02:38:13.174: INFO: node status heartbeat is unchanged for 3.001353963s, waiting for 1m20s Nov 22 02:38:14.175: INFO: node status heartbeat is unchanged for 4.002457441s, waiting for 1m20s Nov 22 02:38:15.172: INFO: node status heartbeat is unchanged for 4.999263198s, waiting for 1m20s Nov 22 02:38:16.175: INFO: node status heartbeat is unchanged for 6.002395887s, waiting for 1m20s Nov 22 02:38:17.176: INFO: node status heartbeat is unchanged for 7.002609161s, waiting for 1m20s Nov 22 02:38:18.174: INFO: node status heartbeat is unchanged for 8.001457118s, waiting for 1m20s Nov 22 02:38:19.172: INFO: node status heartbeat is unchanged for 8.99925369s, waiting for 1m20s Nov 22 02:38:20.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:38:20.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:10 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:38:21.177: INFO: node status heartbeat is unchanged for 1.003264614s, waiting for 1m20s Nov 22 02:38:22.175: INFO: node status heartbeat is unchanged for 2.001587571s, waiting for 1m20s Nov 22 02:38:23.175: INFO: node status heartbeat is unchanged for 3.001818548s, waiting for 1m20s Nov 22 02:38:24.175: INFO: node status heartbeat is unchanged for 4.001924697s, waiting for 1m20s Nov 22 02:38:25.173: INFO: node status heartbeat is unchanged for 4.999874951s, waiting for 1m20s Nov 22 02:38:26.173: INFO: node status heartbeat is unchanged for 5.999141528s, waiting for 1m20s Nov 22 02:38:27.176: INFO: node status heartbeat is unchanged for 7.002656474s, waiting for 1m20s Nov 22 02:38:28.176: INFO: node status heartbeat is unchanged for 8.002184245s, waiting for 1m20s Nov 22 02:38:29.174: INFO: node status heartbeat is unchanged for 9.000925613s, waiting for 1m20s Nov 22 02:38:30.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:38:30.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:20 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:38:31.178: INFO: node status heartbeat is unchanged for 1.004663406s, waiting for 1m20s Nov 22 02:38:32.176: INFO: node status heartbeat is unchanged for 2.002627391s, waiting for 1m20s Nov 22 02:38:33.174: INFO: node status heartbeat is unchanged for 3.000846487s, waiting for 1m20s Nov 22 02:38:34.175: INFO: node status heartbeat is unchanged for 4.001799063s, waiting for 1m20s Nov 22 02:38:35.174: INFO: node status heartbeat is unchanged for 5.000851432s, waiting for 1m20s Nov 22 02:38:36.174: INFO: node status heartbeat is unchanged for 6.000927822s, waiting for 1m20s Nov 22 02:38:37.174: INFO: node status heartbeat is unchanged for 7.000887185s, waiting for 1m20s Nov 22 02:38:38.175: INFO: node status heartbeat is unchanged for 8.002279728s, waiting for 1m20s Nov 22 02:38:39.175: INFO: node status heartbeat is unchanged for 9.001776089s, waiting for 1m20s Nov 22 02:38:40.173: INFO: node status heartbeat is unchanged for 10.000389092s, waiting for 1m20s Nov 22 02:38:41.174: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:38:41.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    
Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:38:42.177: INFO: node status heartbeat is unchanged for 1.002837828s, waiting for 1m20s Nov 22 02:38:43.174: INFO: node status heartbeat is unchanged for 2.000728906s, waiting for 1m20s Nov 22 02:38:44.174: INFO: node status heartbeat is unchanged for 2.99985595s, waiting for 1m20s Nov 22 02:38:45.173: INFO: node status heartbeat is unchanged for 3.99905381s, waiting for 1m20s Nov 22 02:38:46.173: INFO: node status heartbeat is unchanged for 4.999318856s, waiting for 1m20s Nov 22 02:38:47.173: INFO: node status heartbeat is unchanged for 5.999388078s, waiting for 1m20s Nov 22 02:38:48.173: INFO: node status heartbeat is unchanged for 6.998901966s, waiting for 1m20s Nov 22 02:38:49.175: INFO: node status heartbeat is unchanged for 8.000910093s, waiting for 1m20s Nov 22 02:38:50.173: INFO: node status heartbeat is unchanged for 8.99957664s, waiting for 1m20s Nov 22 02:38:51.174: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:38:51.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:40 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:38:52.173: INFO: node status heartbeat is unchanged for 998.944337ms, waiting for 1m20s Nov 22 02:38:53.173: INFO: node status heartbeat is unchanged for 1.99883527s, waiting for 1m20s Nov 22 02:38:54.176: INFO: node status heartbeat is unchanged for 3.001887801s, waiting for 1m20s Nov 22 02:38:55.173: INFO: node status heartbeat is unchanged for 3.99948018s, waiting for 1m20s Nov 22 02:38:56.173: INFO: node status heartbeat is unchanged for 4.998998262s, waiting for 1m20s Nov 22 02:38:57.173: INFO: node status heartbeat is unchanged for 5.998927362s, waiting for 1m20s Nov 22 02:38:58.176: INFO: node status heartbeat is unchanged for 7.001848073s, waiting for 1m20s Nov 22 02:38:59.176: INFO: node status heartbeat is unchanged for 8.002012418s, waiting for 1m20s Nov 22 02:39:00.174: INFO: node status heartbeat is unchanged for 9.000413608s, waiting for 1m20s Nov 22 02:39:01.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:39:01.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:50 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:38:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:39:02.174: INFO: node status heartbeat is unchanged for 1.001210875s, waiting for 1m20s Nov 22 02:39:03.174: INFO: node status heartbeat is unchanged for 2.000810006s, waiting for 1m20s Nov 22 02:39:04.174: INFO: node status heartbeat is unchanged for 3.000988804s, waiting for 1m20s Nov 22 02:39:05.174: INFO: node status heartbeat is unchanged for 4.000430296s, waiting for 1m20s Nov 22 02:39:06.174: INFO: node status heartbeat is unchanged for 5.000382079s, waiting for 1m20s Nov 22 02:39:07.174: INFO: node status heartbeat is unchanged for 6.000470088s, waiting for 1m20s Nov 22 02:39:08.174: INFO: node status heartbeat is unchanged for 7.000342733s, waiting for 1m20s Nov 22 02:39:09.174: INFO: node status heartbeat is unchanged for 8.000945797s, waiting for 1m20s Nov 22 02:39:10.173: INFO: node status heartbeat is unchanged for 9.000086326s, waiting for 1m20s Nov 22 02:39:11.176: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:39:11.180: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:00 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:39:12.176: INFO: node status heartbeat is unchanged for 1.000039155s, waiting for 1m20s Nov 22 02:39:13.175: INFO: node status heartbeat is unchanged for 1.999079096s, waiting for 1m20s Nov 22 02:39:14.175: INFO: node status heartbeat is unchanged for 2.999248904s, waiting for 1m20s Nov 22 02:39:15.174: INFO: node status heartbeat is unchanged for 3.998023135s, waiting for 1m20s Nov 22 02:39:16.173: INFO: node status heartbeat is unchanged for 4.997230384s, waiting for 1m20s Nov 22 02:39:17.173: INFO: node status heartbeat is unchanged for 5.99721756s, waiting for 1m20s Nov 22 02:39:18.173: INFO: node status heartbeat is unchanged for 6.997283903s, waiting for 1m20s Nov 22 02:39:19.173: INFO: node status heartbeat is unchanged for 7.997022645s, waiting for 1m20s Nov 22 02:39:20.174: INFO: node status heartbeat is unchanged for 8.998459459s, waiting for 1m20s Nov 22 02:39:21.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:39:21.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:10 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:39:22.174: INFO: node status heartbeat is unchanged for 1.001116759s, waiting for 1m20s Nov 22 02:39:23.175: INFO: node status heartbeat is unchanged for 2.002123395s, waiting for 1m20s Nov 22 02:39:24.173: INFO: node status heartbeat is unchanged for 3.000194532s, waiting for 1m20s Nov 22 02:39:25.173: INFO: node status heartbeat is unchanged for 4.000302491s, waiting for 1m20s Nov 22 02:39:26.174: INFO: node status heartbeat is unchanged for 5.000872965s, waiting for 1m20s Nov 22 02:39:27.174: INFO: node status heartbeat is unchanged for 6.000590191s, waiting for 1m20s Nov 22 02:39:28.177: INFO: node status heartbeat is unchanged for 7.003541003s, waiting for 1m20s Nov 22 02:39:29.175: INFO: node status heartbeat is unchanged for 8.00155738s, waiting for 1m20s Nov 22 02:39:30.174: INFO: node status heartbeat is unchanged for 9.000624612s, waiting for 1m20s Nov 22 02:39:31.176: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:39:31.181: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:20 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:39:32.176: INFO: node status heartbeat is unchanged for 999.659926ms, waiting for 1m20s Nov 22 02:39:33.174: INFO: node status heartbeat is unchanged for 1.997736222s, waiting for 1m20s Nov 22 02:39:34.175: INFO: node status heartbeat is unchanged for 2.998621515s, waiting for 1m20s Nov 22 02:39:35.173: INFO: node status heartbeat is unchanged for 3.997058213s, waiting for 1m20s Nov 22 02:39:36.175: INFO: node status heartbeat is unchanged for 4.998702983s, waiting for 1m20s Nov 22 02:39:37.176: INFO: node status heartbeat is unchanged for 5.999436221s, waiting for 1m20s Nov 22 02:39:38.176: INFO: node status heartbeat is unchanged for 6.999932689s, waiting for 1m20s Nov 22 02:39:39.175: INFO: node status heartbeat is unchanged for 7.998559823s, waiting for 1m20s Nov 22 02:39:40.173: INFO: node status heartbeat is unchanged for 8.997248323s, waiting for 1m20s Nov 22 02:39:41.176: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:39:41.180: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:30 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:39:42.176: INFO: node status heartbeat is unchanged for 999.923186ms, waiting for 1m20s Nov 22 02:39:43.177: INFO: node status heartbeat is unchanged for 2.001176684s, waiting for 1m20s Nov 22 02:39:44.172: INFO: node status heartbeat is unchanged for 2.996600542s, waiting for 1m20s Nov 22 02:39:45.174: INFO: node status heartbeat is unchanged for 3.997982327s, waiting for 1m20s Nov 22 02:39:46.173: INFO: node status heartbeat is unchanged for 4.997010636s, waiting for 1m20s Nov 22 02:39:47.175: INFO: node status heartbeat is unchanged for 5.999133123s, waiting for 1m20s Nov 22 02:39:48.175: INFO: node status heartbeat is unchanged for 6.998684612s, waiting for 1m20s Nov 22 02:39:49.173: INFO: node status heartbeat is unchanged for 7.997375054s, waiting for 1m20s Nov 22 02:39:50.174: INFO: node status heartbeat is unchanged for 8.997915661s, waiting for 1m20s Nov 22 02:39:51.175: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:39:51.179: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:40 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:39:52.174: INFO: node status heartbeat is unchanged for 999.401525ms, waiting for 1m20s Nov 22 02:39:53.174: INFO: node status heartbeat is unchanged for 1.999271608s, waiting for 1m20s Nov 22 02:39:54.174: INFO: node status heartbeat is unchanged for 2.99870808s, waiting for 1m20s Nov 22 02:39:55.174: INFO: node status heartbeat is unchanged for 3.999335815s, waiting for 1m20s Nov 22 02:39:56.175: INFO: node status heartbeat is unchanged for 5.000556788s, waiting for 1m20s Nov 22 02:39:57.173: INFO: node status heartbeat is unchanged for 5.998416036s, waiting for 1m20s Nov 22 02:39:58.174: INFO: node status heartbeat is unchanged for 6.999209523s, waiting for 1m20s Nov 22 02:39:59.175: INFO: node status heartbeat is unchanged for 7.999891655s, waiting for 1m20s Nov 22 02:40:00.174: INFO: node status heartbeat is unchanged for 8.99889559s, waiting for 1m20s Nov 22 02:40:01.173: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:40:01.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:50 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:39:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:40:02.175: INFO: node status heartbeat is unchanged for 1.001370865s, waiting for 1m20s Nov 22 02:40:03.174: INFO: node status heartbeat is unchanged for 2.000518392s, waiting for 1m20s Nov 22 02:40:04.174: INFO: node status heartbeat is unchanged for 3.000532616s, waiting for 1m20s Nov 22 02:40:05.175: INFO: node status heartbeat is unchanged for 4.001244855s, waiting for 1m20s Nov 22 02:40:06.173: INFO: node status heartbeat is unchanged for 4.999681745s, waiting for 1m20s Nov 22 02:40:07.173: INFO: node status heartbeat is unchanged for 5.999906382s, waiting for 1m20s Nov 22 02:40:08.174: INFO: node status heartbeat is unchanged for 7.000423074s, waiting for 1m20s Nov 22 02:40:09.175: INFO: node status heartbeat is unchanged for 8.001409033s, waiting for 1m20s Nov 22 02:40:10.174: INFO: node status heartbeat is unchanged for 9.000418856s, waiting for 1m20s Nov 22 02:40:11.174: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:40:11.178: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:00 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:40:12.173: INFO: node status heartbeat is unchanged for 999.086453ms, waiting for 1m20s Nov 22 02:40:13.175: INFO: node status heartbeat is unchanged for 2.001706502s, waiting for 1m20s Nov 22 02:40:14.174: INFO: node status heartbeat is unchanged for 3.000849168s, waiting for 1m20s Nov 22 02:40:15.174: INFO: node status heartbeat is unchanged for 4.000202697s, waiting for 1m20s Nov 22 02:40:16.173: INFO: node status heartbeat is unchanged for 4.998995107s, waiting for 1m20s Nov 22 02:40:17.173: INFO: node status heartbeat is unchanged for 5.99948828s, waiting for 1m20s Nov 22 02:40:18.173: INFO: node status heartbeat is unchanged for 6.999621711s, waiting for 1m20s Nov 22 02:40:19.174: INFO: node status heartbeat is unchanged for 8.000193893s, waiting for 1m20s Nov 22 02:40:20.173: INFO: node status heartbeat is unchanged for 8.999704721s, waiting for 1m20s Nov 22 02:40:21.176: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 22 02:40:21.180: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-21 22:29:33 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:10 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-22 02:40:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-21 22:25:50 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-21 22:29:30 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 22 02:40:22.173: INFO: node status heartbeat is unchanged for 996.802046ms, waiting for 1m20s Nov 22 02:40:23.176: INFO: node status heartbeat is unchanged for 1.999764248s, waiting for 1m20s Nov 22 02:40:24.176: INFO: node status heartbeat is unchanged for 2.999906994s, waiting for 1m20s Nov 22 02:40:25.173: INFO: node status heartbeat is unchanged for 3.996844196s, waiting for 1m20s Nov 22 02:40:26.174: INFO: node status heartbeat is unchanged for 4.997751989s, waiting for 1m20s Nov 22 02:40:26.177: INFO: node status heartbeat is unchanged for 5.000782992s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 02:40:26.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-9710" for this suite. • [SLOW TEST:300.051 seconds] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":4,"skipped":217,"failed":0} Nov 22 02:40:26.197: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 02:36:26.749: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722 Nov 22 02:36:26.782: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:36:28.787: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:36:30.787: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:36:32.789: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:36:34.788: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:36:36.787: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:36:38.786: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:36:40.788: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:36:42.790: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 22 02:36:44.788: INFO: The status of Pod back-off-cap is Running (Ready = true) STEP: getting restart delay when capped Nov 22 02:47:57.104: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-11-22 02:42:51 +0000 UTC restartedAt=2021-11-22 02:47:55 +0000 UTC (5m4s) Nov 22 02:53:05.434: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-11-22 02:48:00 +0000 UTC restartedAt=2021-11-22 02:53:05 +0000 UTC (5m5s) Nov 22 02:58:17.732: INFO: getRestartDelay: restartCount 
= 9, finishedAt=2021-11-22 02:53:10 +0000 UTC restartedAt=2021-11-22 02:58:16 +0000 UTC (5m6s) STEP: getting restart delay after a capped delay Nov 22 03:03:24.050: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-11-22 02:58:21 +0000 UTC restartedAt=2021-11-22 03:03:22 +0000 UTC (5m1s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:03:24.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3454" for this suite. • [SLOW TEST:1617.311 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722 ------------------------------ {"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":8,"skipped":926,"failed":0} Nov 22 03:03:24.062: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":3,"skipped":327,"failed":0} Nov 22 02:37:28.175: INFO: Running AfterSuite actions on all nodes Nov 22 03:03:24.100: INFO: Running AfterSuite actions on node 1 Nov 22 03:03:24.100: INFO: Skipping dumping logs from cluster Ran 53 of 5770 Specs in 1734.244 seconds SUCCESS! -- 53 Passed | 0 Failed | 0 Pending | 5717 Skipped Ginkgo ran 1 suite in 28m55.780986438s Test Suite Passed
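The back-off-cap test above observes restart delays of roughly 5m once restartCount reaches 7 (5m4s, 5m5s, 5m6s, 5m1s). This matches kubelet's crash-loop backoff, which (under the commonly documented defaults, assumed here) starts at 10s, doubles after each restart, and is capped at MaxContainerBackOff = 300s. A minimal sketch of that schedule, for reference only:

```python
# Sketch of kubelet's crash-loop backoff schedule.
# Assumed defaults: initial delay 10s, doubling per restart,
# capped at MaxContainerBackOff = 300s (5m).
def restart_delay(restart_count: int,
                  initial_s: int = 10,
                  max_backoff_s: int = 300) -> int:
    """Seconds the kubelet waits before the given restart number."""
    return min(initial_s * (2 ** restart_count), max_backoff_s)

if __name__ == "__main__":
    # The cap is reached by restart 5 (10, 20, 40, 80, 160, 300, ...),
    # so restarts 7-10 in the log all wait ~5m, as observed.
    for n in range(11):
        print(f"restart {n}: {restart_delay(n)}s")
```

The small overshoots in the log (5m4s, 5m6s) are scheduling jitter on top of the 300s cap, not a larger backoff value.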