Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1635566488 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 30 04:01:29.638: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.642: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 30 04:01:29.672: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 30 04:01:29.723: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting
Oct 30 04:01:29.723: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting
Oct 30 04:01:29.723: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 30 04:01:29.723: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 30 04:01:29.723: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 30 04:01:29.741: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 30 04:01:29.741: INFO: e2e test version: v1.21.5
Oct 30 04:01:29.741: INFO: kube-apiserver version: v1.21.1
Oct 30 04:01:29.742: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.748: INFO: Cluster IP family: ipv4
Oct 30 04:01:29.761: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.781: INFO: Cluster IP family: ipv4
Oct 30 04:01:29.758: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.782: INFO: Cluster IP family: ipv4
Oct 30 04:01:29.768: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.790: INFO: Cluster IP family: ipv4
Oct 30 04:01:29.777: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.794: INFO: Cluster IP family: ipv4
Oct 30 04:01:29.776: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.795: INFO: Cluster IP family: ipv4
Oct 30 04:01:29.776: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.797: INFO: Cluster IP family: ipv4
Oct 30 04:01:29.828: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.845: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 30 04:01:29.898: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.937: INFO: Cluster IP family: ipv4
S
------------------------------
Oct 30 04:01:29.884: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:01:29.937: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:30.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
W1030 04:01:30.212247 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:01:30.212: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:01:30.214: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33
Oct 30 04:01:30.216: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:01:30.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-3511" for this suite.
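Note: the skip above comes from a provider gate in the crictl spec's BeforeEach (the crictl.go:33 hook referenced in the log). A minimal sketch of that gate, assuming the upstream e2eskipper helper; the Describe wiring here is illustrative, not the suite's exact code:

```go
// Hypothetical reconstruction of the provider gate behind the [SKIPPING] above.
package node

import (
	"github.com/onsi/ginkgo"

	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.Describe("[sig-node] crictl", func() {
	ginkgo.BeforeEach(func() {
		// On any provider other than gce/gke this calls ginkgo.Skip, which the
		// runner reports as "Only supported for providers [gce gke] (not local)".
		e2eskipper.SkipUnlessProviderIs("gce", "gke")
	})
})
```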
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:29.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W1030 04:01:30.006955 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:01:30.007: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:01:30.010: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull image [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:01:44.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9766" for this suite.
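Note: most specs below block on the framework's pod-phase poll, visible as the repeated `Phase="Pending" ... Elapsed: ...` entries that follow. A minimal client-go sketch of that "Succeeded or Failed" wait, assuming illustrative namespace and pod names and the kubeconfig path the suite logs:

```go
// Sketch of the pod-phase polling pattern seen throughout this log.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	start := time.Now()
	// Poll every 2s for up to 5m, matching the "Waiting up to 5m0s" entries.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "example-pod", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", pod.Name, pod.Status.Phase, time.Since(start))
		// Either terminal phase satisfies the condition "Succeeded or Failed".
		return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}
```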
• [SLOW TEST:14.125 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":54,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:30.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W1030 04:01:30.281463 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:01:30.281: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:01:30.283: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Oct 30 04:01:30.297: INFO: Waiting up to 5m0s for pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a" in namespace "downward-api-4854" to be "Succeeded or Failed"
Oct 30 04:01:30.300: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.879814ms
Oct 30 04:01:32.305: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007174347s
Oct 30 04:01:34.309: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011531051s
Oct 30 04:01:36.313: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015331898s
Oct 30 04:01:38.316: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018429529s
Oct 30 04:01:40.320: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022453209s
Oct 30 04:01:42.325: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02735898s
Oct 30 04:01:44.327: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.030011847s
Oct 30 04:01:46.333: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.035915571s
Oct 30 04:01:48.337: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.039924511s
Oct 30 04:01:50.340: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.043023735s
STEP: Saw pod success
Oct 30 04:01:50.340: INFO: Pod "downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a" satisfied condition "Succeeded or Failed"
Oct 30 04:01:50.343: INFO: Trying to get logs from node node2 pod downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a container dapi-container:
STEP: delete the pod
Oct 30 04:01:50.356: INFO: Waiting for pod downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a to disappear
Oct 30 04:01:50.358: INFO: Pod downward-api-b0d1e62b-8a4f-4dd8-b094-ee485d41545a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:01:50.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4854" for this suite.

• [SLOW TEST:20.124 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":144,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:29.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1030 04:01:29.951135 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:01:29.952: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:01:29.953: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Oct 30 04:01:29.975: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-8505" to be "Succeeded or Failed"
Oct 30 04:01:29.976: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 1.852668ms
Oct 30 04:01:31.979: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004580455s
Oct 30 04:01:33.983: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008667834s
Oct 30 04:01:35.987: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012779918s
Oct 30 04:01:37.991: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016582152s
Oct 30 04:01:39.994: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019901075s
Oct 30 04:01:41.998: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02385887s
Oct 30 04:01:44.002: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 14.02770863s
Oct 30 04:01:46.006: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 16.03181983s
Oct 30 04:01:48.010: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 18.035344388s
Oct 30 04:01:50.023: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 20.04865635s
Oct 30 04:01:52.028: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.053004081s
Oct 30 04:01:52.028: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:01:52.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8505" for this suite.

• [SLOW TEST:22.152 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":35,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:29.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1030 04:01:29.883627 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:01:29.883: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:01:29.886: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Oct 30 04:01:29.942: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad" in namespace "security-context-test-933" to be "Succeeded or Failed"
Oct 30 04:01:29.944: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07071ms
Oct 30 04:01:31.948: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005678261s
Oct 30 04:01:33.953: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01038798s
Oct 30 04:01:35.956: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01409802s
Oct 30 04:01:37.961: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019096334s
Oct 30 04:01:39.981: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 10.038250775s
Oct 30 04:01:41.983: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 12.041035452s
Oct 30 04:01:43.986: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 14.044027618s
Oct 30 04:01:45.991: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 16.048327027s
Oct 30 04:01:47.999: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 18.057017662s
Oct 30 04:01:50.024: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 20.081191623s
Oct 30 04:01:52.027: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Pending", Reason="", readiness=false. Elapsed: 22.084913213s
Oct 30 04:01:54.032: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.089663172s
Oct 30 04:01:54.032: INFO: Pod "alpine-nnp-true-506cd097-717d-406b-9681-1cb48ae04fad" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:01:54.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-933" for this suite.
• [SLOW TEST:24.189 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":25,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:54.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140
STEP: Creating ConfigMap configmap-6655/configmap-test-06a7e990-85e4-4c19-8e53-cd2aeb644ab6
STEP: Updating configMap configmap-6655/configmap-test-06a7e990-85e4-4c19-8e53-cd2aeb644ab6
STEP: Verifying update of ConfigMap configmap-6655/configmap-test-06a7e990-85e4-4c19-8e53-cd2aeb644ab6
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:01:54.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6655" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":2,"skipped":43,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:30.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Oct 30 04:01:30.538: INFO: Waiting up to 5m0s for pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb" in namespace "security-context-6696" to be "Succeeded or Failed"
Oct 30 04:01:30.540: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 1.943324ms
Oct 30 04:01:32.544: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006268852s
Oct 30 04:01:34.549: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010554167s
Oct 30 04:01:36.555: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016396249s
Oct 30 04:01:38.558: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020227536s
Oct 30 04:01:40.561: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023104859s
Oct 30 04:01:42.566: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027564671s
Oct 30 04:01:44.570: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031821896s
Oct 30 04:01:46.574: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.035652685s
Oct 30 04:01:48.577: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.038996746s
Oct 30 04:01:50.582: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.043973434s
Oct 30 04:01:52.589: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.050351608s
Oct 30 04:01:54.593: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.055176747s
STEP: Saw pod success
Oct 30 04:01:54.593: INFO: Pod "security-context-dac2b433-a01c-491d-8918-c7bd19606bfb" satisfied condition "Succeeded or Failed"
Oct 30 04:01:54.596: INFO: Trying to get logs from node node2 pod security-context-dac2b433-a01c-491d-8918-c7bd19606bfb container test-container:
STEP: delete the pod
Oct 30 04:01:54.607: INFO: Waiting for pod security-context-dac2b433-a01c-491d-8918-c7bd19606bfb to disappear
Oct 30 04:01:54.609: INFO: Pod security-context-dac2b433-a01c-491d-8918-c7bd19606bfb no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:01:54.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-6696" for this suite.
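Note: a minimal sketch of a pod exercising pod.Spec.SecurityContext.SupplementalGroups, as in the spec above; the GIDs, image, and name are illustrative. The extra group IDs should appear in the container's `id -G` output:

```go
// Sketch of a SupplementalGroups pod (illustrative values).
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-supplemental-groups"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				// Extra GIDs attached to the first process in every container.
				SupplementalGroups: []int64{1234, 5678}, // illustrative GIDs
			},
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"id", "-G"}, // prints the effective group IDs
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```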
• [SLOW TEST:24.111 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":1,"skipped":279,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:30.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1030 04:01:30.132194 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:01:30.132: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:01:30.133: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Oct 30 04:01:30.152: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f" in namespace "security-context-test-5364" to be "Succeeded or Failed"
Oct 30 04:01:30.154: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.613226ms
Oct 30 04:01:32.157: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005026232s
Oct 30 04:01:34.160: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008723116s
Oct 30 04:01:36.168: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016500133s
Oct 30 04:01:38.172: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020210194s
Oct 30 04:01:40.176: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024620272s
Oct 30 04:01:42.183: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030860264s
Oct 30 04:01:44.188: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.036440087s
Oct 30 04:01:46.192: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.040076068s
Oct 30 04:01:48.195: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.043110006s
Oct 30 04:01:50.197: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.045661696s
Oct 30 04:01:52.201: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.049239207s
Oct 30 04:01:54.205: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.053235216s
Oct 30 04:01:56.209: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.057259734s
Oct 30 04:01:58.212: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f": Phase="Failed", Reason="", readiness=false. Elapsed: 28.060182706s
Oct 30 04:01:58.212: INFO: Pod "busybox-readonly-true-12d1c7d5-b8ea-479c-bd71-be8d48da5f2f" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:01:58.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5364" for this suite.

• [SLOW TEST:28.111 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":130,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:58.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Oct 30 04:01:58.270: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:01:58.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-4906" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:52.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a Pod requesting a RuntimeClass with scheduling without taints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a label on the found node.
STEP: verifying the node has the label foo-707747a9-be3b-48a7-b040-185391781bba bar
STEP: verifying the node has the label fizz-dc6eafad-ba2b-46a6-9790-1009d99b1c4e buzz
STEP: Trying to create runtimeclass and pod
STEP: removing the label fizz-dc6eafad-ba2b-46a6-9790-1009d99b1c4e off the node node1
STEP: verifying the node doesn't have the label fizz-dc6eafad-ba2b-46a6-9790-1009d99b1c4e
STEP: removing the label foo-707747a9-be3b-48a7-b040-185391781bba off the node node1
STEP: verifying the node doesn't have the label foo-707747a9-be3b-48a7-b040-185391781bba
[AfterEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:00.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-4649" for this suite.
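Note: a hedged sketch of what the RuntimeClass spec above creates: a RuntimeClass whose scheduling nodeSelector mirrors the foo-*/fizz-* labels applied to node1, plus a pod requesting it. Handler, names, and labels are illustrative assumptions:

```go
// Sketch of a RuntimeClass with scheduling constraints and a pod that uses it.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"},
		Handler:    "runc", // must name a handler configured in the node's CRI runtime
		Scheduling: &nodev1.Scheduling{
			// Stands in for the foo-*/fizz-* labels the spec put on node1.
			NodeSelector: map[string]string{"foo-example": "bar"},
		},
	}
	rcName := rc.Name
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "runtimeclass-pod"},
		Spec: v1.PodSpec{
			// The scheduler merges the RuntimeClass nodeSelector into the pod's.
			RuntimeClassName: &rcName,
			Containers:       []v1.Container{{Name: "c", Image: "busybox:1.29"}},
		},
	}
	for _, obj := range []interface{}{rc, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
```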
• [SLOW TEST:8.125 seconds]
[sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run a Pod requesting a RuntimeClass with scheduling without taints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":2,"skipped":69,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:50.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Oct 30 04:01:50.475: INFO: Waiting up to 5m0s for pod "busybox-user-0-aec04db0-c116-44ba-b9c6-9a86188115cf" in namespace "security-context-test-1622" to be "Succeeded or Failed"
Oct 30 04:01:50.477: INFO: Pod "busybox-user-0-aec04db0-c116-44ba-b9c6-9a86188115cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021962ms
Oct 30 04:01:52.481: INFO: Pod "busybox-user-0-aec04db0-c116-44ba-b9c6-9a86188115cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005469756s
Oct 30 04:01:54.485: INFO: Pod "busybox-user-0-aec04db0-c116-44ba-b9c6-9a86188115cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009989035s
Oct 30 04:01:56.490: INFO: Pod "busybox-user-0-aec04db0-c116-44ba-b9c6-9a86188115cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014836254s
Oct 30 04:01:58.494: INFO: Pod "busybox-user-0-aec04db0-c116-44ba-b9c6-9a86188115cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019297495s
Oct 30 04:02:00.497: INFO: Pod "busybox-user-0-aec04db0-c116-44ba-b9c6-9a86188115cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022325227s
Oct 30 04:02:00.497: INFO: Pod "busybox-user-0-aec04db0-c116-44ba-b9c6-9a86188115cf" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:00.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1622" for this suite.
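Note: a minimal sketch of the runAsUser pod the busybox-user-0 spec above exercises; image and command are illustrative. Setting RunAsUser to 0 runs the entrypoint as root regardless of the image's USER directive:

```go
// Sketch of a runAsUser: 0 pod (illustrative values).
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(0)
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-0-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "id -u"}, // should print 0
				SecurityContext: &v1.SecurityContext{
					RunAsUser: &uid, // run the container process as uid 0
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```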
• [SLOW TEST:10.063 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:54.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 30 04:01:54.754: INFO: Waiting up to 5m0s for pod "security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56" in namespace "security-context-8138" to be "Succeeded or Failed"
Oct 30 04:01:54.757: INFO: Pod "security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56": Phase="Pending", Reason="", readiness=false. Elapsed: 3.632007ms
Oct 30 04:01:56.763: INFO: Pod "security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009610898s
Oct 30 04:01:58.766: INFO: Pod "security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012382376s
Oct 30 04:02:00.770: INFO: Pod "security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016133999s
Oct 30 04:02:02.774: INFO: Pod "security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019801711s
Oct 30 04:02:04.778: INFO: Pod "security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023958258s
Oct 30 04:02:06.782: INFO: Pod "security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.028468908s
STEP: Saw pod success
Oct 30 04:02:06.782: INFO: Pod "security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56" satisfied condition "Succeeded or Failed"
Oct 30 04:02:06.784: INFO: Trying to get logs from node node2 pod security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56 container test-container:
STEP: delete the pod
Oct 30 04:02:06.799: INFO: Waiting for pod security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56 to disappear
Oct 30 04:02:06.801: INFO: Pod security-context-b2d97444-8b1d-46e8-ba9c-13261bae2e56 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:06.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8138" for this suite.

• [SLOW TEST:12.096 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":2,"skipped":326,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:02:07.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] should have OwnerReferences set
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:07.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-4403" for this suite.
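Note: the NodeLease spec above asserts that each node's Lease object is owned by its Node. A minimal client-go sketch of that check, assuming an illustrative node name and the kubeconfig path the suite logs:

```go
// Sketch: read a node's Lease from kube-node-lease and inspect its owner.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Each kubelet maintains one Lease named after its node.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "node1", metav1.GetOptions{}) // "node1" is illustrative
	if err != nil {
		panic(err)
	}
	for _, ref := range lease.OwnerReferences {
		// The spec expects this to reference the Node object itself.
		fmt.Printf("owner: %s %s (uid %s)\n", ref.Kind, ref.Name, ref.UID)
	}
	fmt.Printf("last renewed: %v\n", lease.Spec.RenewTime)
}
```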
•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":3,"skipped":665,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:02:00.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run without a specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:08.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1469" for this suite.

• [SLOW TEST:8.046 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run without a specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":3,"skipped":336,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:58.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 30 04:02:11.431: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:11.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2834" for this suite.
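Note: a hedged sketch of the termination-message pod the spec above creates; image, command, and name are illustrative. The kubelet copies the file at terminationMessagePath into the container's terminated state, which is what the "Expected: &{DONE} to match" entry compares:

```go
// Sketch of a terminationMessagePath pod (illustrative values).
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "term",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-log"},
				// kubelet reads this file into
				// status.containerStatuses[].state.terminated.message.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: v1.TerminationMessageReadFile,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```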
• [SLOW TEST:13.109 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":2,"skipped":177,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:44.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
STEP: Creating pod liveness-13b5e840-9b2b-42e1-9902-4935916ff0bd in namespace container-probe-2020
Oct 30 04:01:58.498: INFO: Started pod liveness-13b5e840-9b2b-42e1-9902-4935916ff0bd in namespace container-probe-2020
STEP: checking the pod's current state and verifying that restartCount is present
Oct 30 04:01:58.501: INFO: Initial restart count of pod liveness-13b5e840-9b2b-42e1-9902-4935916ff0bd is 0
Oct 30 04:02:18.545: INFO: Restart count of pod container-probe-2020/liveness-13b5e840-9b2b-42e1-9902-4935916ff0bd is now 1 (20.043895322s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:18.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2020" for this suite.
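Note: a hedged sketch of a liveness probe like the one in the redirect spec above: the kubelet follows redirects to local paths, so probe health tracks the redirect target. The image tag, port, and redirect path are illustrative assumptions; the embedded handler field is named Handler in the k8s.io/api v0.21 used by this suite (later releases renamed it ProbeHandler):

```go
// Sketch of an HTTP liveness probe through a local redirect (illustrative).
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-redirect-example"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Args:  []string{"netexec", "--http-port=8080"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						HTTPGet: &v1.HTTPGetAction{
							// The server answers with a 302 to a local path;
							// the kubelet follows it when evaluating the probe.
							Path: "/redirect?loc=/healthz", // illustrative target
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```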
• [SLOW TEST:34.099 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":2,"skipped":236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:02:07.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
Oct 30 04:02:07.512: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] should create a pod that prints his name and namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
STEP: creating the pod
Oct 30 04:02:07.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-320 create -f -'
Oct 30 04:02:08.042: INFO: stderr: ""
Oct 30 04:02:08.042: INFO: stdout: "pod/dapi-test-pod created\n"
STEP: checking if name and namespace were passed correctly
Oct 30 04:02:20.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-320 logs dapi-test-pod test-container'
Oct 30 04:02:20.207: INFO: stderr: ""
Oct 30 04:02:20.207: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-320\nMY_POD_IP=10.244.4.123\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
Oct 30 04:02:20.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-320 logs dapi-test-pod test-container'
Oct 30 04:02:20.364: INFO: stderr: ""
Oct 30 04:02:20.364: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-320\nMY_POD_IP=10.244.4.123\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:20.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-320" for this suite.

• [SLOW TEST:12.889 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133
    should create a pod that prints his name and namespace
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":4,"skipped":669,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:02:08.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Oct 30 04:02:08.950: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-9171" to be "Succeeded or Failed"
Oct 30 04:02:08.952: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.419942ms
Oct 30 04:02:10.957: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007032552s
Oct 30 04:02:12.965: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015618438s
Oct 30 04:02:14.969: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019186774s
Oct 30 04:02:16.972: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021962565s
Oct 30 04:02:18.975: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025431263s
Oct 30 04:02:20.979: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029477093s
Oct 30 04:02:22.984: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.034124104s
Oct 30 04:02:22.984: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:22.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9171" for this suite.
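Note: the dapi-test-pod output above (MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, MY_HOST_IP) comes from downward API env vars. A minimal sketch of such a pod; image and command are illustrative, and the helper function is hypothetical:

```go
// Sketch of downward API env vars matching the names seen in the log output.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldEnv is a hypothetical helper wrapping a fieldRef env var.
func fieldEnv(name, path string) v1.EnvVar {
	return v1.EnvVar{
		Name: name,
		ValueFrom: &v1.EnvVarSource{
			FieldRef: &v1.ObjectFieldSelector{FieldPath: path},
		},
	}
}

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-test-pod"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "env"},
				Env: []v1.EnvVar{
					fieldEnv("MY_POD_NAME", "metadata.name"),
					fieldEnv("MY_POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("MY_POD_IP", "status.podIP"),
					fieldEnv("MY_HOST_IP", "status.hostIP"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```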
• [SLOW TEST:14.086 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":4,"skipped":367,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:01:29.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet W1030 04:01:29.888218 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:01:29.888: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:01:29.889: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-896cbacd-9886-48ca-b99b-492ca8dcc7ee in namespace kubelet-1101
I1030 04:01:29.974098 27 runners.go:190] Created replication controller with name: cleanup20-896cbacd-9886-48ca-b99b-492ca8dcc7ee, namespace: kubelet-1101, replica count: 20
I1030 04:01:40.024858 27 runners.go:190] cleanup20-896cbacd-9886-48ca-b99b-492ca8dcc7ee Pods: 20 out of 20 created, 1 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 04:01:50.025083 27 runners.go:190] cleanup20-896cbacd-9886-48ca-b99b-492ca8dcc7ee Pods: 20 out of 20 created, 14 running, 6 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 04:02:00.025651 27 runners.go:190] cleanup20-896cbacd-9886-48ca-b99b-492ca8dcc7ee Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 30 04:02:01.027: INFO: Checking pods on node node2 via /runningpods endpoint
Oct 30 04:02:01.027: INFO: Checking pods on node node1 via /runningpods endpoint
Oct 30 04:02:01.047: INFO: Resource usage on node "node1":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"          1.161       6525.75                  2377.38
"runtime"    0.979       2566.99                  550.38
"kubelet"    0.979       2566.99                  550.38

Resource usage on node "node2":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"          1.710       4279.38                  1175.39
"runtime"    0.913       1739.99                  605.56
"kubelet"    0.913       1739.99                  605.56

Resource usage on node "master1":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"          0.612       5283.64                  1777.88
"runtime"    0.136       663.59                   277.63
"kubelet"    0.136       663.59                   277.63

Resource usage on node "master2":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"          0.407       3668.39                  1490.29
"runtime"    0.107       558.00                   222.17
"kubelet"    0.107       558.00                   222.17

Resource usage on node "master3":
container    cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"          0.518       3975.02                  1746.19
"runtime"    0.108       633.37                   313.36
"kubelet"    0.108       633.37                   313.36

STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-896cbacd-9886-48ca-b99b-492ca8dcc7ee in namespace kubelet-1101, will wait for the garbage collector to delete the pods
Oct 30 04:02:01.105: INFO: Deleting ReplicationController cleanup20-896cbacd-9886-48ca-b99b-492ca8dcc7ee took: 5.019117ms
Oct 30 04:02:01.706: INFO: Terminating ReplicationController cleanup20-896cbacd-9886-48ca-b99b-492ca8dcc7ee pods took: 600.817429ms
Oct 30 04:02:21.407: INFO: Checking pods on node node2 via /runningpods endpoint
Oct 30 04:02:21.407: INFO: Checking pods on node node1 via /runningpods endpoint
Oct 30 04:02:22.989: INFO: Deleting 20 pods on 2 nodes completed in 2.581939084s after the RC was deleted
Oct 30 04:02:22.989: INFO: CPU usage of containers on node "master2":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.275  0.280  0.336  0.383  0.383  0.383
"runtime"    0.000  0.000  0.088  0.088  0.101  0.101  0.101
"kubelet"    0.000  0.000  0.088  0.088  0.101  0.101  0.101

CPU usage of containers on node "master3":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.368  0.448  0.451  0.498  0.498  0.498
"runtime"    0.000  0.000  0.104  0.108  0.108  0.108  0.108
"kubelet"    0.000  0.000  0.104  0.108  0.108  0.108  0.108

CPU usage of containers on node "node1":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.000  1.161  1.161  1.522  1.522  1.522
"runtime"    0.000  0.000  0.470  0.470  0.470  0.470  0.470
"kubelet"    0.000  0.000  0.470  0.470  0.470  0.470  0.470

CPU usage of containers on node "node2":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.000  1.710  1.710  1.858  1.858  1.858
"runtime"    0.000  0.000  0.691  0.913  0.913  0.913  0.913
"kubelet"    0.000  0.000  0.691  0.913  0.913  0.913  0.913

CPU usage of containers on node "master1":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.365  0.491  0.492  0.610  0.610  0.610
"runtime"    0.000  0.000  0.132  0.132  0.133  0.133  0.133
"kubelet"    0.000  0.000  0.132  0.132  0.133  0.133  0.133

[AfterEach] Clean up pods on node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node node1
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node node2
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:23.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-1101" for this suite.

• [SLOW TEST:53.160 seconds]
[sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":46,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:02:23.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] the kubelet should create and update a lease in the kube-node-lease namespace
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50
STEP: check that lease for this Kubelet exists in the kube-node-lease namespace
STEP: check that node lease is updated at least once within the lease duration
[AfterEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:02:23.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP:
Destroying namespace "node-lease-test-4434" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":2,"skipped":285,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:11.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Oct 30 04:02:11.494: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947" in namespace "security-context-test-8872" to be "Succeeded or Failed" Oct 30 04:02:11.496: INFO: Pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947": Phase="Pending", Reason="", readiness=false. Elapsed: 1.949224ms Oct 30 04:02:13.499: INFO: Pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005228677s Oct 30 04:02:15.504: INFO: Pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010505774s Oct 30 04:02:17.509: INFO: Pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015499582s Oct 30 04:02:19.514: INFO: Pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020426162s Oct 30 04:02:21.518: INFO: Pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024463653s Oct 30 04:02:23.521: INFO: Pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.027733054s Oct 30 04:02:23.522: INFO: Pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947" satisfied condition "Succeeded or Failed" Oct 30 04:02:23.528: INFO: Got logs for pod "busybox-privileged-true-51aa9c57-0ea8-4ed2-8a20-78a3b6311947": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:23.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8872" for this suite. 
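The "busybox-privileged-true-…" pod above runs its container with privileged: true, and the spec verifies that a host-level operation inside the container succeeds (a sibling spec asserts the same operation fails when privileged is false). A minimal sketch of such a pod (name, image tag, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.34
    # Creating a network device requires host-side CAP_NET_ADMIN,
    # which only a privileged container receives.
    command: ["sh", "-c", "ip link add dummy0 type dummy"]
    securityContext:
      privileged: true

The empty log in the output above ('Got logs for pod … : ""') is expected: the command succeeds silently, so nothing is written to stdout.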
• [SLOW TEST:12.076 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":3,"skipped":180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:24.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 Oct 30 04:02:24.115: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:24.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-5473" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:19.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:24.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8367" for this suite. • [SLOW TEST:5.103 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":3,"skipped":522,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:20.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:26.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3518" for this suite. • [SLOW TEST:6.072 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":5,"skipped":699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:24.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:30.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1887" for this suite. 
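The three Container Runtime specs above cover the image-pull matrix: a pull from a private registry succeeds with an attached image pull secret, and fails both against an invalid registry and against a private registry without credentials. A sketch of the with-secret case (registry host, image, and secret contents are placeholders, not the suite's actual test registry):

apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded ~/.docker/config.json holding credentials for the registry
  .dockerconfigjson: <base64-encoded-docker-config>
---
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  restartPolicy: Never
  imagePullSecrets:
  - name: regcred   # drop this list to reproduce the "without secret" failure mode
  containers:
  - name: app
    image: registry.example.com/team/app:1.0   # illustrative private image

Without the imagePullSecrets entry, or with an unreachable registry host, the kubelet reports ErrImagePull/ImagePullBackOff, which is the terminal container state the two negative specs wait for.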
• [SLOW TEST:6.073 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":3,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:27.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 30 04:02:27.331: INFO: Waiting up to 5m0s for pod "security-context-90965dac-7950-4622-9043-5b2d1aeb1d17" in namespace "security-context-6098" to be "Succeeded or Failed" Oct 30 04:02:27.334: INFO: Pod "security-context-90965dac-7950-4622-9043-5b2d1aeb1d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212082ms Oct 30 04:02:29.336: INFO: Pod "security-context-90965dac-7950-4622-9043-5b2d1aeb1d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004776646s Oct 30 04:02:31.341: INFO: Pod "security-context-90965dac-7950-4622-9043-5b2d1aeb1d17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009947464s STEP: Saw pod success Oct 30 04:02:31.341: INFO: Pod "security-context-90965dac-7950-4622-9043-5b2d1aeb1d17" satisfied condition "Succeeded or Failed" Oct 30 04:02:31.344: INFO: Trying to get logs from node node1 pod security-context-90965dac-7950-4622-9043-5b2d1aeb1d17 container test-container: STEP: delete the pod Oct 30 04:02:31.487: INFO: Waiting for pod security-context-90965dac-7950-4622-9043-5b2d1aeb1d17 to disappear Oct 30 04:02:31.489: INFO: Pod security-context-90965dac-7950-4622-9043-5b2d1aeb1d17 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:31.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6098" for this suite. 
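The seccomp spec above still drives the legacy alpha annotation (its STEP line names seccomp.security.alpha.kubernetes.io/pod directly). A sketch of an equivalent pod; on clusters of roughly this vintage the annotation and the securityContext field below are two spellings of the same request and must agree if both are set:

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-unconfined-demo
  annotations:
    # Legacy alpha form exercised by this spec (deprecated in favor of the field).
    seccomp.security.alpha.kubernetes.io/pod: unconfined
spec:
  restartPolicy: Never
  securityContext:
    seccompProfile:
      type: Unconfined   # structured equivalent, available since v1.19
  containers:
  - name: test-container
    image: busybox:1.34
    # "Seccomp: 0" in /proc/self/status confirms no filter is attached.
    command: ["grep", "Seccomp:", "/proc/self/status"]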
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":6,"skipped":1146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:24.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Oct 30 04:02:24.332: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-5f77d708-451c-428a-997b-b51d8b6a64e3" in namespace "security-context-test-3799" to be "Succeeded or Failed" Oct 30 04:02:24.335: INFO: Pod "alpine-nnp-nil-5f77d708-451c-428a-997b-b51d8b6a64e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327318ms Oct 30 04:02:26.338: INFO: Pod "alpine-nnp-nil-5f77d708-451c-428a-997b-b51d8b6a64e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005470854s Oct 30 04:02:28.341: INFO: Pod "alpine-nnp-nil-5f77d708-451c-428a-997b-b51d8b6a64e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008332895s Oct 30 04:02:30.344: INFO: Pod "alpine-nnp-nil-5f77d708-451c-428a-997b-b51d8b6a64e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011734902s Oct 30 04:02:32.348: INFO: Pod "alpine-nnp-nil-5f77d708-451c-428a-997b-b51d8b6a64e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015975138s Oct 30 04:02:34.352: INFO: Pod "alpine-nnp-nil-5f77d708-451c-428a-997b-b51d8b6a64e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019939802s Oct 30 04:02:34.352: INFO: Pod "alpine-nnp-nil-5f77d708-451c-428a-997b-b51d8b6a64e3" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:34.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3799" for this suite. 
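The "alpine-nnp-nil-…" pod above leaves allowPrivilegeEscalation unset while running as a non-zero UID; the spec asserts that, in that combination, escalation remains permitted by default (the no_new_privs bit is not set). A minimal sketch (names and image illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-nil-demo
spec:
  restartPolicy: Never
  containers:
  - name: alpine
    image: alpine:3.14
    # "NoNewPrivs: 0" here means privilege escalation is still possible.
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      runAsUser: 1000
      # allowPrivilegeEscalation deliberately omitted (nil): with a non-root UID
      # and no other restrictions, it defaults to allowed.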
• [SLOW TEST:10.161 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:31.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 30 04:02:31.731: INFO: Waiting up to 5m0s for pod "security-context-77ff66ac-da4a-47c4-9d03-33557f692f96" in namespace "security-context-9739" to be "Succeeded or Failed" Oct 30 04:02:31.733: INFO: Pod "security-context-77ff66ac-da4a-47c4-9d03-33557f692f96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303885ms Oct 30 04:02:33.737: INFO: Pod "security-context-77ff66ac-da4a-47c4-9d03-33557f692f96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005967756s Oct 30 04:02:35.740: INFO: Pod "security-context-77ff66ac-da4a-47c4-9d03-33557f692f96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009034155s STEP: Saw pod success Oct 30 04:02:35.740: INFO: Pod "security-context-77ff66ac-da4a-47c4-9d03-33557f692f96" satisfied condition "Succeeded or Failed" Oct 30 04:02:35.742: INFO: Trying to get logs from node node1 pod security-context-77ff66ac-da4a-47c4-9d03-33557f692f96 container test-container: STEP: delete the pod Oct 30 04:02:36.019: INFO: Waiting for pod security-context-77ff66ac-da4a-47c4-9d03-33557f692f96 to disappear Oct 30 04:02:36.022: INFO: Pod security-context-77ff66ac-da4a-47c4-9d03-33557f692f96 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:36.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9739" for this suite. 
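The spec above sets RunAsUser at the pod level rather than per container, so every container in the pod inherits that UID unless it overrides it; the test then reads the container log to verify the effective UID. A minimal sketch (UID and image illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: run-as-user-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # pod-level: inherited by all containers
  containers:
  - name: test-container
    image: busybox:1.34
    command: ["id", "-u"]  # expected to print 1001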
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":7,"skipped":1257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:34.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:36.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-5655" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":5,"skipped":618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:36.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Oct 30 04:02:36.897: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:36.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-7686" for this suite. 
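The Sysctls spec further up creates a pod requesting a kernel parameter that is namespaced but not in the kubelet's safe set, and asserts the pod is rejected rather than started. The safe/unsafe split looks like this sketch (the unsafe sysctl named in the comment is an illustrative example, not necessarily the one the suite uses):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # in the safe set: accepted by default
      value: "1"
    # A namespaced-but-unsafe entry such as kernel.msgmax is rejected with
    # SysctlForbidden unless the kubelet runs with
    # --allowed-unsafe-sysctls=kernel.msgmax.
  containers:
  - name: test
    image: busybox:1.34
    command: ["sysctl", "kernel.shm_rmid_forced"]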
S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:23.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Oct 30 04:02:51.529: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:51.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7223" for this suite. 
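The PreStop spec above deletes a running pod gracefully and verifies it stays alive until the preStop hook finishes ("pod is running" at 04:02:51, well after the delete was issued). The mechanics need only two fields, sketched here with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  # Must exceed the hook's runtime, or the kubelet kills the pod mid-hook.
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: busybox:1.34
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # On deletion the kubelet runs this first and delays SIGTERM
          # until it exits (or the grace period expires).
          command: ["sh", "-c", "sleep 10"]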
• [SLOW TEST:28.088 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":5,"skipped":611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:37.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Oct 30 04:02:37.115: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:02:39.121: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:02:41.120: INFO: The status of Pod master is Running (Ready = true) Oct 30 04:02:41.135: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:02:43.142: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:02:45.139: INFO: The status of Pod slave is Running (Ready = true) Oct 30 04:02:45.152: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:02:47.161: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:02:49.158: INFO: The status of Pod private is Running (Ready = true) Oct 30 04:02:49.176: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:02:51.184: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:02:53.179: INFO: The status of Pod default is Running (Ready = true) Oct 30 04:02:53.184: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:53.185: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:53.276: INFO: Exec stderr: "" Oct 30 04:02:53.278: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:53.278: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:53.365: INFO: Exec stderr: "" Oct 30 04:02:53.368: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:53.368: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:53.457: INFO: Exec 
stderr: "" Oct 30 04:02:53.459: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:53.459: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:53.545: INFO: Exec stderr: "" Oct 30 04:02:53.548: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:53.548: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:53.645: INFO: Exec stderr: "" Oct 30 04:02:53.648: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:53.648: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:53.733: INFO: Exec stderr: "" Oct 30 04:02:53.736: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:53.736: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:53.849: INFO: Exec stderr: "" Oct 30 04:02:53.852: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:53.852: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:53.935: INFO: Exec stderr: "" Oct 30 04:02:53.938: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:53.938: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.027: INFO: Exec stderr: "" Oct 30 04:02:54.030: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.030: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.111: INFO: Exec stderr: "" Oct 30 04:02:54.113: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.113: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.191: INFO: Exec stderr: "" Oct 30 04:02:54.194: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.194: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.287: INFO: Exec stderr: "" Oct 30 04:02:54.289: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.290: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.374: INFO: Exec stderr: "" Oct 30 04:02:54.378: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] 
Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.378: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.471: INFO: Exec stderr: "" Oct 30 04:02:54.474: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.474: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.559: INFO: Exec stderr: "" Oct 30 04:02:54.562: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.562: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.650: INFO: Exec stderr: "" Oct 30 04:02:54.652: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.652: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.733: INFO: Exec stderr: "" Oct 30 04:02:54.736: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.736: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.835: INFO: Exec stderr: "" Oct 30 04:02:54.837: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.837: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:54.919: INFO: Exec stderr: "" Oct 30 04:02:54.922: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:54.922: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:55.006: INFO: Exec stderr: "" Oct 30 04:02:57.029: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-8113"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-8113"/host; echo host > "/var/lib/kubelet/mount-propagation-8113"/host/file] Namespace:mount-propagation-8113 PodName:hostexec-node1-9pnd2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:02:57.029: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.124: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.124: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.227: INFO: pod master mount master: stdout: "master", stderr: 
"" error: Oct 30 04:02:57.230: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.230: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.312: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:57.314: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.314: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.406: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:57.409: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.410: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.492: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:57.495: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.495: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.577: INFO: pod master mount host: stdout: "host", stderr: "" error: Oct 30 04:02:57.578: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.578: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.655: INFO: pod slave mount master: stdout: "master", stderr: "" error: Oct 30 04:02:57.657: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.657: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.741: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Oct 30 04:02:57.744: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.744: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.823: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:57.826: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.826: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:57.909: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file 
or directory" error: command terminated with exit code 1 Oct 30 04:02:57.911: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:57.911: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.012: INFO: pod slave mount host: stdout: "host", stderr: "" error: Oct 30 04:02:58.014: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.014: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.097: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:58.100: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.100: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.211: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:58.214: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.214: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.300: INFO: pod private mount private: stdout: "private", stderr: "" error: Oct 30 04:02:58.302: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.302: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.384: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:58.386: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.386: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.468: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:58.471: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.471: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.565: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:58.568: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.568: INFO: >>> 
kubeConfig: /root/.kube/config Oct 30 04:02:58.667: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:58.669: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.670: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.755: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:58.757: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.757: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.843: INFO: pod default mount default: stdout: "default", stderr: "" error: Oct 30 04:02:58.846: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:58.846: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:58.928: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:02:58.928: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-8113"/master/file` = master] Namespace:mount-propagation-8113 PodName:hostexec-node1-9pnd2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:02:58.928: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:59.040: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-8113"/slave/file] Namespace:mount-propagation-8113 PodName:hostexec-node1-9pnd2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:02:59.040: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:59.121: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-8113"/host] Namespace:mount-propagation-8113 PodName:hostexec-node1-9pnd2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:02:59.121: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:59.238: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-8113 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:59.238: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:59.330: INFO: Exec stderr: "" Oct 30 04:02:59.333: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-8113 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:59.333: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:59.424: INFO: Exec stderr: "" Oct 30 04:02:59.426: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-8113 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:59.426: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:59.521: INFO: Exec stderr: "" Oct 30 04:02:59.523: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-8113 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:02:59.523: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:02:59.619: INFO: Exec stderr: "" Oct 30 04:02:59.619: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-8113"] Namespace:mount-propagation-8113 PodName:hostexec-node1-9pnd2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:02:59.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node1-9pnd2 in namespace mount-propagation-8113 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:02:59.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-8113" for this suite. 
• [SLOW TEST:22.642 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":6,"skipped":886,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:01:54.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-759cd14f-ca57-4813-b0f9-770063caa726 in namespace container-probe-2876 Oct 30 04:02:06.196: INFO: Started pod busybox-759cd14f-ca57-4813-b0f9-770063caa726 in namespace container-probe-2876 Oct 30 04:02:06.196: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (1.377µs elapsed) Oct 30 04:02:08.197: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (2.000509793s elapsed) Oct 30 04:02:10.198: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (4.00189783s elapsed) Oct 30 04:02:12.204: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (6.007965454s elapsed) Oct 30 04:02:14.207: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (8.010203048s elapsed) Oct 30 04:02:16.211: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (10.014393072s elapsed) Oct 30 04:02:18.212: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (12.015426589s elapsed) Oct 30 04:02:20.212: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (14.015800815s elapsed) Oct 30 04:02:22.216: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (16.019601668s elapsed) Oct 30 04:02:24.216: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (18.020040829s elapsed) Oct 30 04:02:26.218: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (20.02209651s elapsed) Oct 30 04:02:28.219: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (22.022775484s elapsed) Oct 30 04:02:30.220: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (24.023591484s elapsed) Oct 30 04:02:32.221: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (26.024932556s elapsed) Oct 30 04:02:34.222: 
INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (28.025928581s elapsed) Oct 30 04:02:36.223: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (30.02676354s elapsed) Oct 30 04:02:38.224: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (32.027459529s elapsed) Oct 30 04:02:40.224: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (34.028096104s elapsed) Oct 30 04:02:42.227: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (36.030751436s elapsed) Oct 30 04:02:44.229: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (38.032435604s elapsed) Oct 30 04:02:46.230: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (40.034069657s elapsed) Oct 30 04:02:48.231: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (42.034630027s elapsed) Oct 30 04:02:50.231: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (44.035129301s elapsed) Oct 30 04:02:52.234: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (46.037825783s elapsed) Oct 30 04:02:54.235: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (48.038782483s elapsed) Oct 30 04:02:56.240: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (50.043589358s elapsed) Oct 30 04:02:58.241: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (52.044234791s elapsed) Oct 30 04:03:00.242: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (54.045236262s elapsed) Oct 30 04:03:02.244: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (56.048081182s elapsed) Oct 30 04:03:04.246: INFO: pod container-probe-2876/busybox-759cd14f-ca57-4813-b0f9-770063caa726 is not ready (58.049939419s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:06.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2876" for this suite. 
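The spec above (container_probe.go:237) pins down kubelet behavior for exec readiness probes with a timeout: since kubelet 1.20 (hence the [MinimumKubeletVersion:1.20] tag) timeoutSeconds is enforced for exec probes, so a probe command that outlives its timeout counts as a failure and the container never becomes Ready, which is exactly the minute of "is not ready" polling recorded above. A minimal manifest reproducing that behavior might look like the sketch below; the pod name, image tag, and durations are illustrative, not the test's actual spec.

apiVersion: v1
kind: Pod
metadata:
  name: readiness-exec-timeout        # hypothetical name
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/sh", "-c", "sleep 10"]   # deliberately outlives the timeout
      timeoutSeconds: 1                          # probe is killed after 1s, so it never succeeds
      periodSeconds: 2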
• [SLOW TEST:72.106 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":3,"skipped":59,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:06.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 Oct 30 04:03:06.326: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:06.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-9414" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:00.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Oct 30 04:03:00.041: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:03:02.045: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:03:04.045: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:03:06.048: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Oct 30 04:03:06.051: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1508 
PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:03:06.051: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:03:06.507: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-1508 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:03:06.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Oct 30 04:03:06.607: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1508 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:03:06.607: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:06.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-1508" for this suite. • [SLOW TEST:6.702 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":7,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:00.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-5e174d21-ef3f-4c4d-8649-92c45ad746dd in namespace container-probe-2033 Oct 30 04:02:14.431: INFO: Started pod startup-5e174d21-ef3f-4c4d-8649-92c45ad746dd in namespace container-probe-2033 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:02:14.433: INFO: Initial restart count of pod startup-5e174d21-ef3f-4c4d-8649-92c45ad746dd is 0 Oct 30 04:03:14.559: INFO: Restart count of pod container-probe-2033/startup-5e174d21-ef3f-4c4d-8649-92c45ad746dd is now 1 (1m0.125328876s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:14.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2033" for this suite. 
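The spec that just finished (container_probe.go:371) checks probe sequencing: the liveness probe must stay dormant until the startup probe first succeeds, after which it takes over; here an always-failing liveness probe produced the restart seen at ~1m0s. A rough equivalent follows, with hypothetical name, file path, and timings rather than the test's actual spec.

apiVersion: v1
kind: Pod
metadata:
  name: startup-then-liveness          # hypothetical name
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/sh", "-c", "sleep 10; touch /tmp/startup; sleep 600"]
    startupProbe:
      exec:
        command: ["cat", "/tmp/startup"]         # fails until the file appears at ~10s
      failureThreshold: 60
      periodSeconds: 1
    livenessProbe:
      exec:
        command: ["/bin/sh", "-c", "exit 1"]     # always fails, but only runs after startup succeeds
      periodSeconds: 1
      failureThreshold: 3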
• [SLOW TEST:74.184 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":3,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:14.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 30 04:03:14.723: INFO: Waiting up to 5m0s for pod "security-context-04a1aa4c-0e4c-4d9a-8b92-a24b5072577e" in namespace "security-context-8021" to be "Succeeded or Failed" Oct 30 04:03:14.728: INFO: Pod "security-context-04a1aa4c-0e4c-4d9a-8b92-a24b5072577e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.16814ms Oct 30 04:03:16.731: INFO: Pod "security-context-04a1aa4c-0e4c-4d9a-8b92-a24b5072577e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008703186s Oct 30 04:03:18.735: INFO: Pod "security-context-04a1aa4c-0e4c-4d9a-8b92-a24b5072577e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012512736s Oct 30 04:03:20.740: INFO: Pod "security-context-04a1aa4c-0e4c-4d9a-8b92-a24b5072577e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017661594s STEP: Saw pod success Oct 30 04:03:20.740: INFO: Pod "security-context-04a1aa4c-0e4c-4d9a-8b92-a24b5072577e" satisfied condition "Succeeded or Failed" Oct 30 04:03:20.743: INFO: Trying to get logs from node node2 pod security-context-04a1aa4c-0e4c-4d9a-8b92-a24b5072577e container test-container: STEP: delete the pod Oct 30 04:03:20.781: INFO: Waiting for pod security-context-04a1aa4c-0e4c-4d9a-8b92-a24b5072577e to disappear Oct 30 04:03:20.783: INFO: Pod security-context-04a1aa4c-0e4c-4d9a-8b92-a24b5072577e no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:20.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8021" for this suite. 
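The seccomp spec above still drives the legacy seccomp.security.alpha.kubernetes.io annotation path (visible in the STEP text), but the same per-container unconfined profile is normally expressed through securityContext.seccompProfile, which went GA in 1.19. A hedged sketch, where the pod name and command are assumptions; the pod just prints its seccomp status and exits, matching the "Succeeded or Failed" wait above.

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-unconfined             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["grep", "Seccomp", "/proc/self/status"]   # expect "Seccomp: 0" (no filtering)
    securityContext:
      seccompProfile:
        type: Unconfined               # container-level override of any pod-level profile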
• [SLOW TEST:6.103 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":4,"skipped":203,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:20.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 30 04:03:20.898: INFO: Waiting up to 5m0s for pod "security-context-b2c9818f-f125-4c1e-a434-cbe2a2ce2cfa" in namespace "security-context-7895" to be "Succeeded or Failed" Oct 30 04:03:20.900: INFO: Pod "security-context-b2c9818f-f125-4c1e-a434-cbe2a2ce2cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 1.928728ms Oct 30 04:03:22.904: INFO: Pod "security-context-b2c9818f-f125-4c1e-a434-cbe2a2ce2cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005832672s Oct 30 04:03:24.907: INFO: Pod "security-context-b2c9818f-f125-4c1e-a434-cbe2a2ce2cfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009404489s STEP: Saw pod success Oct 30 04:03:24.908: INFO: Pod "security-context-b2c9818f-f125-4c1e-a434-cbe2a2ce2cfa" satisfied condition "Succeeded or Failed" Oct 30 04:03:24.911: INFO: Trying to get logs from node node2 pod security-context-b2c9818f-f125-4c1e-a434-cbe2a2ce2cfa container test-container: STEP: delete the pod Oct 30 04:03:24.941: INFO: Waiting for pod security-context-b2c9818f-f125-4c1e-a434-cbe2a2ce2cfa to disappear Oct 30 04:03:24.943: INFO: Pod security-context-b2c9818f-f125-4c1e-a434-cbe2a2ce2cfa no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:24.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7895" for this suite. 
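container.SecurityContext.RunAsUser forces the UID of the container's main process regardless of the image's default user; the spec above runs a short-lived pod and reads the UID back from its logs. A minimal sketch, where the UID 1001 and the pod name are arbitrary choices, not the test's values.

apiVersion: v1
kind: Pod
metadata:
  name: run-as-user-demo               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["id", "-u"]              # container log should read "1001"
    securityContext:
      runAsUser: 1001                  # container-level value overrides pod-level runAsUser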
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":5,"skipped":240,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:36.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-5b6f304e-f718-431c-905b-088dc015ca71 in namespace container-probe-6949 Oct 30 04:02:40.360: INFO: Started pod busybox-5b6f304e-f718-431c-905b-088dc015ca71 in namespace container-probe-6949 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:02:40.363: INFO: Initial restart count of pod busybox-5b6f304e-f718-431c-905b-088dc015ca71 is 0 Oct 30 04:03:30.467: INFO: Restart count of pod container-probe-6949/busybox-5b6f304e-f718-431c-905b-088dc015ca71 is now 1 (50.103972756s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:30.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6949" for this suite. 
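This is the liveness-side counterpart of the readiness-timeout spec earlier: an exec liveness probe that exceeds timeoutSeconds counts as a failed probe, and after failureThreshold consecutive failures the kubelet kills and restarts the container; the restart count reached 1 after ~50s above. A sketch with hypothetical name and timings:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-timeout          # hypothetical name
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "600"]
    livenessProbe:
      exec:
        command: ["/bin/sh", "-c", "sleep 10"]   # always runs past the 1s timeout
      timeoutSeconds: 1
      periodSeconds: 10
      failureThreshold: 3              # ~30s of failed probes before the first restart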
• [SLOW TEST:54.162 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":8,"skipped":1411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:30.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-b1b8913f-8818-4a71-ab56-ce8b49642678 in namespace container-probe-7525 Oct 30 04:02:38.566: INFO: Started pod startup-b1b8913f-8818-4a71-ab56-ce8b49642678 in namespace container-probe-7525 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:02:38.568: INFO: Initial restart count of pod startup-b1b8913f-8818-4a71-ab56-ce8b49642678 is 0 Oct 30 04:03:46.714: INFO: Restart count of pod container-probe-7525/startup-b1b8913f-8818-4a71-ab56-ce8b49642678 is now 1 (1m8.146807107s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:46.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7525" for this suite. 
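A startup probe that never succeeds is treated like a failing liveness probe: once failureThreshold * periodSeconds elapses, the container is killed and restarted, which is the restart logged at ~1m8s above. A hypothetical sketch (name and timings illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: startup-never-succeeds         # hypothetical name
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "600"]
    startupProbe:
      exec:
        command: ["/bin/sh", "-c", "exit 1"]     # never passes
      failureThreshold: 6
      periodSeconds: 10                # kill-and-restart after roughly 60s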
• [SLOW TEST:76.203 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":4,"skipped":783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:30.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:50.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-594" for this suite. 
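Readiness gates extend the Ready calculation: the pod reports Ready only while every listed conditionType is True in status.conditions, which is why the spec above toggles k8s.io/test-condition1 and test-condition2 and watches readiness follow. The extra conditions live on the pod's status subresource, so in practice an external controller, not the container, patches them; the e2e test does the patching directly through the API. A sketch of the pod side, with a hypothetical name:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-gate-demo            # hypothetical name
spec:
  readinessGates:
  - conditionType: "k8s.io/test-condition1"   # Ready requires this condition to be True...
  - conditionType: "k8s.io/test-condition2"   # ...and this one as well
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "600"]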
• [SLOW TEST:20.076 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":9,"skipped":1509,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:24.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-2f6aa561-8727-4560-8947-128219eec2b5 in namespace container-probe-4970 Oct 30 04:03:31.017: INFO: Started pod startup-override-2f6aa561-8727-4560-8947-128219eec2b5 in namespace container-probe-4970 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:03:31.019: INFO: Initial restart count of pod startup-override-2f6aa561-8727-4560-8947-128219eec2b5 is 1 Oct 30 04:03:53.080: INFO: Restart count of pod container-probe-4970/startup-override-2f6aa561-8727-4560-8947-128219eec2b5 is now 2 (22.060201058s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:53.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4970" for this suite. 
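The [Feature:ProbeTerminationGracePeriod] spec exercises the probe-level terminationGracePeriodSeconds field, alpha in v1.21 behind the ProbeTerminationGracePeriod feature gate: when a startup (or liveness) probe triggers a kill, the probe's own grace period overrides the pod-level one, so restarts arrive quickly even under a long pod grace period, hence the ~22s restart interval above. A hypothetical sketch; all values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: startup-grace-override         # hypothetical name
spec:
  terminationGracePeriodSeconds: 600   # deliberately long pod-level default
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "1200"]
    startupProbe:
      exec:
        command: ["/bin/sh", "-c", "exit 1"]
      failureThreshold: 1
      periodSeconds: 10
      terminationGracePeriodSeconds: 5 # overrides the 600s pod value for probe-triggered kills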
• [SLOW TEST:28.115 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":6,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:47.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Oct 30 04:03:47.208: INFO: Waiting up to 5m0s for pod "pod-always-succeed59ae830d-bc16-45d3-ba9c-34ebaa98d788" in namespace "pods-1164" to be "Succeeded or Failed" Oct 30 04:03:47.210: INFO: Pod "pod-always-succeed59ae830d-bc16-45d3-ba9c-34ebaa98d788": Phase="Pending", Reason="", readiness=false. Elapsed: 1.997613ms Oct 30 04:03:49.213: INFO: Pod "pod-always-succeed59ae830d-bc16-45d3-ba9c-34ebaa98d788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005014537s Oct 30 04:03:51.221: INFO: Pod "pod-always-succeed59ae830d-bc16-45d3-ba9c-34ebaa98d788": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01292205s Oct 30 04:03:53.224: INFO: Pod "pod-always-succeed59ae830d-bc16-45d3-ba9c-34ebaa98d788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015972467s STEP: Saw pod success Oct 30 04:03:53.224: INFO: Pod "pod-always-succeed59ae830d-bc16-45d3-ba9c-34ebaa98d788" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:03:55.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1164" for this suite. 
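The sandbox spec guards against a kubelet regression: once every container in a pod has exited successfully and restartPolicy does not call for a restart, the kubelet must not create a fresh pause/sandbox container; judging by the "Getting events about the pod" steps above, the test inspects the pod's events to confirm only one sandbox was ever created. A pod of the same shape (name hypothetical, mirroring the test's pod-always-succeed naming):

apiVersion: v1
kind: Pod
metadata:
  name: pod-always-succeed-demo        # hypothetical name
spec:
  restartPolicy: OnFailure             # exit code 0 counts as done, so nothing restarts
  containers:
  - name: succeed
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/true"]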
• [SLOW TEST:8.069 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":5,"skipped":1025,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:01:29.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods W1030 04:01:29.891022 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:01:29.891: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:01:29.933: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Oct 30 04:01:37.765: INFO: watch delete seen for pod-submit-status-0-0 Oct 30 04:01:37.766: INFO: Pod pod-submit-status-0-0 on node node1 timings total=7.830693043s t=1.516s run=0s execute=0s Oct 30 04:01:39.157: INFO: watch delete seen for pod-submit-status-1-0 Oct 30 04:01:39.158: INFO: Pod pod-submit-status-1-0 on node node2 timings total=9.222784168s t=739ms run=0s execute=0s Oct 30 04:01:43.071: INFO: watch delete seen for pod-submit-status-0-1 Oct 30 04:01:43.071: INFO: Pod pod-submit-status-0-1 on node node1 timings total=5.30579972s t=1.431s run=0s execute=0s Oct 30 04:01:45.368: INFO: watch delete seen for pod-submit-status-0-2 Oct 30 04:01:45.368: INFO: Pod pod-submit-status-0-2 on node node1 timings total=2.296262653s t=36ms run=0s execute=0s Oct 30 04:01:48.159: INFO: watch delete seen for pod-submit-status-1-1 Oct 30 04:01:48.159: INFO: Pod pod-submit-status-1-1 on node node2 timings total=9.001496277s t=1.82s run=0s execute=0s Oct 30 04:01:49.602: INFO: watch delete seen for pod-submit-status-2-0 Oct 30 04:01:49.602: INFO: Pod pod-submit-status-2-0 on node node1 timings total=19.666920573s t=1.877s run=0s execute=0s Oct 30 04:01:50.357: INFO: watch delete seen for pod-submit-status-0-3 Oct 30 04:01:50.357: INFO: Pod pod-submit-status-0-3 on node node2 timings total=4.989485273s t=662ms run=0s execute=0s Oct 30 04:01:52.175: INFO: watch delete seen for pod-submit-status-2-1 Oct 30 04:01:52.175: INFO: Pod pod-submit-status-2-1 on node node1 timings total=2.57300019s t=43ms run=0s execute=0s Oct 30 04:01:55.757: INFO: watch delete seen for pod-submit-status-2-2 Oct 30 04:01:55.757: INFO: Pod pod-submit-status-2-2 on node node2 timings total=3.58258595s t=1.433s run=0s 
execute=0s Oct 30 04:01:58.555: INFO: watch delete seen for pod-submit-status-0-4 Oct 30 04:01:58.555: INFO: Pod pod-submit-status-0-4 on node node2 timings total=8.197935717s t=455ms run=0s execute=0s Oct 30 04:02:02.662: INFO: watch delete seen for pod-submit-status-2-3 Oct 30 04:02:02.662: INFO: Pod pod-submit-status-2-3 on node node2 timings total=6.904852212s t=1.052s run=0s execute=0s Oct 30 04:02:02.803: INFO: watch delete seen for pod-submit-status-1-2 Oct 30 04:02:02.803: INFO: Pod pod-submit-status-1-2 on node node1 timings total=14.644213691s t=1.037s run=0s execute=0s Oct 30 04:02:06.398: INFO: watch delete seen for pod-submit-status-1-3 Oct 30 04:02:06.398: INFO: Pod pod-submit-status-1-3 on node node1 timings total=3.594944199s t=863ms run=0s execute=0s Oct 30 04:02:07.358: INFO: watch delete seen for pod-submit-status-2-4 Oct 30 04:02:07.358: INFO: Pod pod-submit-status-2-4 on node node2 timings total=4.695383813s t=1.333s run=0s execute=0s Oct 30 04:02:14.756: INFO: watch delete seen for pod-submit-status-0-5 Oct 30 04:02:14.756: INFO: Pod pod-submit-status-0-5 on node node2 timings total=16.201136582s t=564ms run=0s execute=0s Oct 30 04:02:15.358: INFO: watch delete seen for pod-submit-status-1-4 Oct 30 04:02:15.359: INFO: Pod pod-submit-status-1-4 on node node2 timings total=8.960117828s t=367ms run=0s execute=0s Oct 30 04:02:23.860: INFO: watch delete seen for pod-submit-status-2-5 Oct 30 04:02:23.860: INFO: Pod pod-submit-status-2-5 on node node1 timings total=16.502702284s t=862ms run=0s execute=0s Oct 30 04:02:30.804: INFO: watch delete seen for pod-submit-status-1-5 Oct 30 04:02:30.804: INFO: Pod pod-submit-status-1-5 on node node1 timings total=15.445110211s t=649ms run=0s execute=0s Oct 30 04:02:32.116: INFO: watch delete seen for pod-submit-status-0-6 Oct 30 04:02:32.116: INFO: Pod pod-submit-status-0-6 on node node1 timings total=17.359272833s t=598ms run=0s execute=0s Oct 30 04:02:33.956: INFO: watch delete seen for pod-submit-status-2-6 Oct 30 04:02:33.957: INFO: Pod pod-submit-status-2-6 on node node2 timings total=10.096022116s t=1.974s run=3s execute=0s Oct 30 04:02:36.360: INFO: watch delete seen for pod-submit-status-1-6 Oct 30 04:02:36.360: INFO: Pod pod-submit-status-1-6 on node node2 timings total=5.555963564s t=1.277s run=0s execute=0s Oct 30 04:02:38.904: INFO: watch delete seen for pod-submit-status-2-7 Oct 30 04:02:38.904: INFO: Pod pod-submit-status-2-7 on node node1 timings total=4.947386727s t=1.033s run=0s execute=0s Oct 30 04:02:42.893: INFO: watch delete seen for pod-submit-status-0-7 Oct 30 04:02:42.893: INFO: Pod pod-submit-status-0-7 on node node2 timings total=10.777149448s t=1.584s run=0s execute=0s Oct 30 04:02:44.352: INFO: watch delete seen for pod-submit-status-2-8 Oct 30 04:02:44.352: INFO: Pod pod-submit-status-2-8 on node node2 timings total=5.447633622s t=913ms run=0s execute=0s Oct 30 04:02:52.894: INFO: watch delete seen for pod-submit-status-1-7 Oct 30 04:02:52.894: INFO: Pod pod-submit-status-1-7 on node node2 timings total=16.534218024s t=837ms run=0s execute=0s Oct 30 04:02:52.903: INFO: watch delete seen for pod-submit-status-0-8 Oct 30 04:02:52.903: INFO: Pod pod-submit-status-0-8 on node node2 timings total=10.010233427s t=749ms run=0s execute=0s Oct 30 04:03:02.911: INFO: watch delete seen for pod-submit-status-2-9 Oct 30 04:03:02.911: INFO: Pod pod-submit-status-2-9 on node node2 timings total=18.559062117s t=1.693s run=0s execute=0s Oct 30 04:03:02.918: INFO: watch delete seen for pod-submit-status-0-9 Oct 30 
04:03:02.918: INFO: Pod pod-submit-status-0-9 on node node2 timings total=10.014903407s t=745ms run=0s execute=0s Oct 30 04:03:02.934: INFO: watch delete seen for pod-submit-status-1-8 Oct 30 04:03:02.934: INFO: Pod pod-submit-status-1-8 on node node2 timings total=10.040278591s t=1.921s run=0s execute=0s Oct 30 04:03:10.779: INFO: watch delete seen for pod-submit-status-2-10 Oct 30 04:03:10.779: INFO: Pod pod-submit-status-2-10 on node node2 timings total=7.868045152s t=441ms run=0s execute=0s Oct 30 04:03:12.860: INFO: watch delete seen for pod-submit-status-1-9 Oct 30 04:03:12.860: INFO: Pod pod-submit-status-1-9 on node node2 timings total=9.925649998s t=1.586s run=0s execute=0s Oct 30 04:03:12.871: INFO: watch delete seen for pod-submit-status-0-10 Oct 30 04:03:12.871: INFO: Pod pod-submit-status-0-10 on node node2 timings total=9.952454229s t=658ms run=0s execute=0s Oct 30 04:03:15.307: INFO: watch delete seen for pod-submit-status-2-11 Oct 30 04:03:15.307: INFO: Pod pod-submit-status-2-11 on node node2 timings total=4.527684546s t=1.374s run=0s execute=0s Oct 30 04:03:22.803: INFO: watch delete seen for pod-submit-status-0-11 Oct 30 04:03:22.803: INFO: Pod pod-submit-status-0-11 on node node1 timings total=9.932656945s t=954ms run=0s execute=0s Oct 30 04:03:22.903: INFO: watch delete seen for pod-submit-status-2-12 Oct 30 04:03:22.903: INFO: Pod pod-submit-status-2-12 on node node2 timings total=7.596038606s t=1.281s run=0s execute=0s Oct 30 04:03:22.920: INFO: watch delete seen for pod-submit-status-1-10 Oct 30 04:03:22.920: INFO: Pod pod-submit-status-1-10 on node node2 timings total=10.059940139s t=1.217s run=0s execute=0s Oct 30 04:03:32.899: INFO: watch delete seen for pod-submit-status-0-12 Oct 30 04:03:32.899: INFO: Pod pod-submit-status-0-12 on node node2 timings total=10.095450321s t=903ms run=0s execute=0s Oct 30 04:03:32.908: INFO: watch delete seen for pod-submit-status-2-13 Oct 30 04:03:32.908: INFO: Pod pod-submit-status-2-13 on node node2 timings total=10.005162171s t=1.225s run=0s execute=0s Oct 30 04:03:32.919: INFO: watch delete seen for pod-submit-status-1-11 Oct 30 04:03:32.920: INFO: Pod pod-submit-status-1-11 on node node2 timings total=9.999311122s t=688ms run=0s execute=0s Oct 30 04:03:42.899: INFO: watch delete seen for pod-submit-status-1-12 Oct 30 04:03:42.899: INFO: Pod pod-submit-status-1-12 on node node2 timings total=9.979838087s t=1.93s run=0s execute=0s Oct 30 04:03:42.913: INFO: watch delete seen for pod-submit-status-0-13 Oct 30 04:03:42.913: INFO: Pod pod-submit-status-0-13 on node node2 timings total=10.013557288s t=1.041s run=0s execute=0s Oct 30 04:03:42.924: INFO: watch delete seen for pod-submit-status-2-14 Oct 30 04:03:42.924: INFO: Pod pod-submit-status-2-14 on node node2 timings total=10.015974698s t=935ms run=0s execute=0s Oct 30 04:03:52.813: INFO: watch delete seen for pod-submit-status-1-13 Oct 30 04:03:52.813: INFO: Pod pod-submit-status-1-13 on node node1 timings total=9.913981857s t=1.777s run=0s execute=0s Oct 30 04:03:52.925: INFO: watch delete seen for pod-submit-status-0-14 Oct 30 04:03:52.926: INFO: Pod pod-submit-status-0-14 on node node2 timings total=10.012875125s t=1.782s run=0s execute=0s Oct 30 04:03:55.301: INFO: watch delete seen for pod-submit-status-1-14 Oct 30 04:03:55.301: INFO: Pod pod-submit-status-1-14 on node node1 timings total=2.487321829s t=277ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:03:55.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9337" for this suite.
• [SLOW TEST:145.448 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
Pod Container Status
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200
should never report success for a pending container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":1,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:03:55.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
E1030 04:03:57.519312 37 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 148 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x653b640, 0x9beb6a0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x653b640, 0x9beb6a0)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc002084f0c, 0x2, 0x2)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0020b4e00, 0xc002084f00, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc004903170, 0xc0020b4e00, 0xc0049117a0, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc004903170, 0xc0020b4e00, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004903170, 0xc0020b4e00, 0xc004903170, 0xc0020b4e00)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0020b4e00, 0x14, 0xc000e9daa0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc000e02160, 0xc0047c4a68, 0x14, 0xc000e9daa0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001267f80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001267f80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc001276380, 0x768f9a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002a25950, 0x0, 0x768f9a0, 0xc00016c840)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002a25950, 0x768f9a0, 0xc00016c840)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc004b9b7c0, 0xc002a25950, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc004b9b7c0, 0x1)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc004b9b7c0, 0xc0020ab608)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000170280, 0x7fc6049a7600, 0xc000703380, 0x6f05d9d, 0x14, 0xc00395aba0, 0x3, 0x3, 0x7745ab8, 0xc00016c840, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x7694a60, 0xc000703380, 0x6f05d9d, 0x14, 0xc003d12640, 0x3, 0x4, 0x4)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x7694a60, 0xc000703380, 0x6f05d9d, 0x14, 0xc003ee2b40, 0x2, 0x2, 0x25)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703380)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000703380)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000703380, 0x70e7b58)
    /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-2466".
STEP: Found 5 events.
Oct 30 04:03:57.523: INFO: At 2021-10-30 04:03:55 +0000 UTC - event for startup-6a7c6e92-2e1e-40dd-af8e-156dc3f6b81c: {default-scheduler } Scheduled: Successfully assigned container-probe-2466/startup-6a7c6e92-2e1e-40dd-af8e-156dc3f6b81c to node2
Oct 30 04:03:57.523: INFO: At 2021-10-30 04:03:56 +0000 UTC - event for startup-6a7c6e92-2e1e-40dd-af8e-156dc3f6b81c: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Oct 30 04:03:57.523: INFO: At 2021-10-30 04:03:57 +0000 UTC - event for startup-6a7c6e92-2e1e-40dd-af8e-156dc3f6b81c: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 291.894198ms
Oct 30 04:03:57.523: INFO: At 2021-10-30 04:03:57 +0000 UTC - event for startup-6a7c6e92-2e1e-40dd-af8e-156dc3f6b81c: {kubelet node2} Created: Created container busybox
Oct 30 04:03:57.523: INFO: At 2021-10-30 04:03:57 +0000 UTC - event for startup-6a7c6e92-2e1e-40dd-af8e-156dc3f6b81c: {kubelet node2} Started: Started container busybox
Oct 30 04:03:57.525: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 30 04:03:57.525: INFO: startup-6a7c6e92-2e1e-40dd-af8e-156dc3f6b81c node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 04:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 04:03:55 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 04:03:55 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 04:03:55 +0000 UTC }]
Oct 30 04:03:57.525: INFO:
Oct 30 04:03:57.530: INFO: Logging node info for node master1
Oct 30 04:03:57.533: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 154600 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null
flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:52 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:52 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:52 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:03:52 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 04:03:57.534: INFO: Logging kubelet events for node master1 Oct 30 04:03:57.537: INFO: Logging pods the kubelet thinks is on node master1 Oct 30 04:03:57.561: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded) Oct 30 04:03:57.561: INFO: Container docker-registry ready: true, restart count 0 Oct 30 04:03:57.561: INFO: Container nginx ready: true, restart count 0 Oct 30 04:03:57.561: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 04:03:57.561: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 04:03:57.561: INFO: Container node-exporter ready: true, restart count 0 Oct 30 04:03:57.561: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.561: INFO: Container kube-scheduler ready: true, restart count 0 Oct 30 04:03:57.561: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.561: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 04:03:57.561: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.561: INFO: Container coredns ready: true, restart count 1 Oct 30 04:03:57.561: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.561: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:03:57.561: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.561: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 04:03:57.561: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.561: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 30 04:03:57.561: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container 
statuses recorded) Oct 30 04:03:57.561: INFO: Init container install-cni ready: true, restart count 0 Oct 30 04:03:57.561: INFO: Container kube-flannel ready: true, restart count 2 W1030 04:03:57.575556 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 04:03:57.645: INFO: Latency metrics for node master1 Oct 30 04:03:57.645: INFO: Logging node info for node master2 Oct 30 04:03:57.649: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 154546 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: 
{{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:48 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:48 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:48 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:03:48 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 04:03:57.649: INFO: Logging kubelet events for node master2 Oct 30 04:03:57.652: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 04:03:57.665: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.665: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 04:03:57.665: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.665: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 04:03:57.665: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.665: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 04:03:57.665: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.665: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 04:03:57.665: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 04:03:57.665: INFO: Init container install-cni ready: true, restart count 2 Oct 30 04:03:57.665: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 04:03:57.665: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.666: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:03:57.666: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 04:03:57.666: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 04:03:57.666: INFO: Container node-exporter ready: true, restart count 0 W1030 04:03:57.679578 37 metrics_grabber.go:105] Did not receive an 
external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 04:03:57.735: INFO: Latency metrics for node master2 Oct 30 04:03:57.735: INFO: Logging node info for node master3 Oct 30 04:03:57.738: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 154541 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 
0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:48 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:48 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:48 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:03:48 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 04:03:57.738: INFO: Logging kubelet events for node master3 Oct 30 04:03:57.740: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 04:03:57.754: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.754: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 04:03:57.754: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.754: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 04:03:57.754: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.754: INFO: Container autoscaler ready: true, restart count 1 Oct 30 04:03:57.754: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.754: INFO: Container nfd-controller ready: true, restart count 0 Oct 30 04:03:57.754: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.754: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 04:03:57.754: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.754: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 
04:03:57.754: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:03:57.754: INFO: Init container install-cni ready: true, restart count 2
Oct 30 04:03:57.754: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 04:03:57.754: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:03:57.754: INFO: Container kube-multus ready: true, restart count 1
Oct 30 04:03:57.754: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:03:57.754: INFO: Container coredns ready: true, restart count 1
Oct 30 04:03:57.754: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:03:57.754: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:03:57.754: INFO: Container prometheus-operator ready: true, restart count 0
Oct 30 04:03:57.755: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:03:57.755: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:03:57.755: INFO: Container node-exporter ready: true, restart count 0
W1030 04:03:57.768744 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 04:03:57.854: INFO: Latency metrics for node master3
Oct 30 04:03:57.854: INFO: Logging node info for node node1
Oct 30 04:03:57.859: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 154563 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 04:01:56 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:50 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:50 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:50 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:03:50 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 04:03:57.860: INFO: Logging kubelet events for node node1 Oct 30 04:03:57.863: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 04:03:57.880: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.880: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:03:57.880: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.880: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 04:03:57.880: INFO: pod-ready started at 2021-10-30 04:03:30 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.880: INFO: Container pod-readiness-gate ready: true, restart count 0 Oct 30 04:03:57.880: INFO: startup-ac4ac464-9d98-423d-a05c-f01fcdf15444 started at 2021-10-30 04:02:23 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.880: INFO: Container 
busybox ready: false, restart count 0
Oct 30 04:03:57.880: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 30 04:03:57.880: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:03:57.880: INFO: Container node-exporter ready: true, restart count 0
Oct 30 04:03:57.880: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container config-reloader ready: true, restart count 0
Oct 30 04:03:57.880: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 30 04:03:57.880: INFO: Container grafana ready: true, restart count 0
Oct 30 04:03:57.880: INFO: Container prometheus ready: true, restart count 1
Oct 30 04:03:57.880: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container nfd-worker ready: true, restart count 0
Oct 30 04:03:57.880: INFO: pod-back-off-image started at 2021-10-30 04:03:53 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container back-off ready: true, restart count 0
Oct 30 04:03:57.880: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:03:57.880: INFO: Init container install-cni ready: true, restart count 2
Oct 30 04:03:57.880: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 04:03:57.880: INFO: explicit-root-uid started at 2021-10-30 04:03:55 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container explicit-root-uid ready: false, restart count 0
Oct 30 04:03:57.880: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 04:03:57.880: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container collectd ready: true, restart count 0
Oct 30 04:03:57.880: INFO: Container collectd-exporter ready: true, restart count 0
Oct 30 04:03:57.880: INFO: Container rbac-proxy ready: true, restart count 0
Oct 30 04:03:57.880: INFO: liveness-override-711696c9-3f2f-49fe-a954-f3061e07c0c8 started at 2021-10-30 04:03:51 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container agnhost-container ready: false, restart count 1
Oct 30 04:03:57.880: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container nginx-proxy ready: true, restart count 2
Oct 30 04:03:57.880: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container discover ready: false, restart count 0
Oct 30 04:03:57.880: INFO: Container init ready: false, restart count 0
Oct 30 04:03:57.880: INFO: Container install ready: false, restart count 0
Oct 30 04:03:57.880: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:03:57.880: INFO: Container nodereport ready: true, restart count 0
Oct 30 04:03:57.880: INFO: Container reconcile ready: true, restart count 0
Oct 30 04:03:57.880: INFO:
liveness-fbf2ac47-2a77-490d-87d7-e659fc63b13f started at 2021-10-30 04:01:30 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:57.880: INFO: Container agnhost-container ready: true, restart count 0 W1030 04:03:57.891822 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 04:03:58.107: INFO: Latency metrics for node node1 Oct 30 04:03:58.107: INFO: Logging node info for node node2 Oct 30 04:03:58.110: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 154705 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 04:01:29 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:57 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:57 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:03:57 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:03:57 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 04:03:58.111: INFO: Logging kubelet events for node node2 Oct 30 04:03:58.113: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 04:03:58.126: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 04:03:58.126: INFO: 
cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 04:03:58.126: INFO: Container nodereport ready: true, restart count 0 Oct 30 04:03:58.126: INFO: Container reconcile ready: true, restart count 0 Oct 30 04:03:58.126: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 04:03:58.126: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 04:03:58.126: INFO: Container node-exporter ready: true, restart count 0 Oct 30 04:03:58.126: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container tas-extender ready: true, restart count 0 Oct 30 04:03:58.126: INFO: startup-6a7c6e92-2e1e-40dd-af8e-156dc3f6b81c started at 2021-10-30 04:03:55 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container busybox ready: false, restart count 0 Oct 30 04:03:58.126: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 04:03:58.126: INFO: Container collectd ready: true, restart count 0 Oct 30 04:03:58.126: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 04:03:58.126: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 04:03:58.126: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 04:03:58.126: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 04:03:58.126: INFO: back-off-cap started at 2021-10-30 04:01:29 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container back-off-cap ready: false, restart count 4 Oct 30 04:03:58.126: INFO: liveness-exec started at 2021-10-30 04:03:06 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container liveness-exec ready: true, restart count 0 Oct 30 04:03:58.126: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Init container install-cni ready: true, restart count 2 Oct 30 04:03:58.126: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 04:03:58.126: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:03:58.126: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 04:03:58.126: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 04:03:58.126: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 04:03:58.126: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 04:03:58.126: INFO: Container discover ready: false, restart count 0 Oct 30 04:03:58.126: INFO: Container init ready: false, restart count 0 Oct 30 04:03:58.126: 
INFO: Container install ready: false, restart count 0 Oct 30 04:03:58.126: INFO: busybox-7e3a774b-314c-4c1d-9929-b57f1461f4b3 started at 2021-10-30 04:03:06 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container busybox ready: true, restart count 0 Oct 30 04:03:58.126: INFO: liveness-http started at 2021-10-30 04:03:07 +0000 UTC (0+1 container statuses recorded) Oct 30 04:03:58.126: INFO: Container liveness-http ready: true, restart count 0 W1030 04:03:58.140185 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 04:03:58.365: INFO: Latency metrics for node node2 Oct 30 04:03:58.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2466" for this suite. •! Panic [2.893 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x653b640, 0x9beb6a0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc002084f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0020b4e00, 0xc002084f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc004903170, 0xc0020b4e00, 0xc0049117a0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc004903170, 0xc0020b4e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004903170, 0xc0020b4e00, 0xc004903170, 0xc0020b4e00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0020b4e00, 0x14, 0xc000e9daa0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc000e02160, 0xc0047c4a68, 0x14, 0xc000e9daa0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000703380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000703380, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:55.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:04:01.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-292" for this suite. • [SLOW TEST:6.045 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":6,"skipped":1032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:58.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Oct 30 04:03:58.510: INFO: Found ClusterRoles; assuming RBAC is enabled. 
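------------------------------
The panic above originates in the framework's pod-startup poll: WaitForPodContainerStarted drives wait.PollImmediate with the condition built by podContainerStarted (resource.go:334). In the v1 API, ContainerStatus.Started is a *bool that remains nil until the kubelet first reports startup-probe state, so an unguarded dereference during that window is a plausible source of the nil-pointer crash here. Below is a minimal sketch of that failure mode and its guard; the function and names are illustrative, not the framework's actual code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// containerStarted has the shape of a condition function polled by
// wait.PollImmediate. The nil check on Started is the guard the panicking
// path evidently lacked: Started stays nil until the kubelet has evaluated
// the container's startup probe at least once.
func containerStarted(pod *corev1.Pod, name string) bool {
	for _, status := range pod.Status.ContainerStatuses {
		if status.Name != name {
			continue
		}
		if status.Started == nil {
			return false // not reported yet: keep polling instead of panicking
		}
		return *status.Started
	}
	return false
}

func main() {
	// Simulate a freshly scheduled pod: Started is deliberately left nil,
	// as it would be before the kubelet's first startup-probe evaluation.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{{Name: "busybox"}},
	}}
	fmt.Println("started:", containerStarted(pod, "busybox"))
}
------------------------------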
[It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Oct 30 04:03:58.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8603 create -f -' Oct 30 04:03:58.954: INFO: stderr: "" Oct 30 04:03:58.954: INFO: stdout: "secret/test-secret created\n" Oct 30 04:03:58.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8603 create -f -' Oct 30 04:03:59.284: INFO: stderr: "" Oct 30 04:03:59.284: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Oct 30 04:04:03.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8603 logs secret-test-pod test-container' Oct 30 04:04:03.465: INFO: stderr: "" Oct 30 04:04:03.465: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:04:03.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-8603" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":2,"skipped":165,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:04:01.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 30 04:04:01.392: INFO: Waiting up to 5m0s for pod "security-context-6d479064-0011-45e8-a9f2-9dc848437c37" in namespace "security-context-4033" to be "Succeeded or Failed" Oct 30 04:04:01.396: INFO: Pod "security-context-6d479064-0011-45e8-a9f2-9dc848437c37": Phase="Pending", Reason="", readiness=false. Elapsed: 3.651788ms Oct 30 04:04:03.399: INFO: Pod "security-context-6d479064-0011-45e8-a9f2-9dc848437c37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006258762s Oct 30 04:04:05.401: INFO: Pod "security-context-6d479064-0011-45e8-a9f2-9dc848437c37": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008734853s STEP: Saw pod success Oct 30 04:04:05.401: INFO: Pod "security-context-6d479064-0011-45e8-a9f2-9dc848437c37" satisfied condition "Succeeded or Failed" Oct 30 04:04:05.403: INFO: Trying to get logs from node node2 pod security-context-6d479064-0011-45e8-a9f2-9dc848437c37 container test-container: STEP: delete the pod Oct 30 04:04:05.415: INFO: Waiting for pod security-context-6d479064-0011-45e8-a9f2-9dc848437c37 to disappear Oct 30 04:04:05.418: INFO: Pod security-context-6d479064-0011-45e8-a9f2-9dc848437c37 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:04:05.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4033" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":7,"skipped":1060,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:04:05.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:04:05.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-1889" for this suite. 
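------------------------------
The seccomp spec that passed above ("should support seccomp unconfined on the pod") creates a pod carrying the seccomp.security.alpha.kubernetes.io/pod annotation named in its STEP line, runs a command that inspects /proc/self/status, and expects "Seccomp: 0" (no filter applied). A sketch of such a pod follows; the pod name and image are illustrative, and the securityContext field shown alongside the annotation is the non-deprecated equivalent rather than something the test itself sets.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-unconfined-demo", // hypothetical name
			Annotations: map[string]string{
				// the deprecated alpha annotation referenced by the test's STEP
				"seccomp.security.alpha.kubernetes.io/pod": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// field-based form that replaced the annotation (GA in v1.19)
				SeccompProfile: &corev1.SeccompProfile{
					Type: corev1.SeccompProfileTypeUnconfined,
				},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.28",
				// "Seccomp: 0" in /proc/self/status means no filter is applied
				Command: []string{"grep", "Seccomp:", "/proc/self/status"},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------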
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":8,"skipped":1065,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:06.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-7e3a774b-314c-4c1d-9929-b57f1461f4b3 in namespace container-probe-344 Oct 30 04:03:16.796: INFO: Started pod busybox-7e3a774b-314c-4c1d-9929-b57f1461f4b3 in namespace container-probe-344 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:03:16.799: INFO: Initial restart count of pod busybox-7e3a774b-314c-4c1d-9929-b57f1461f4b3 is 0 Oct 30 04:04:10.913: INFO: Restart count of pod container-probe-344/busybox-7e3a774b-314c-4c1d-9929-b57f1461f4b3 is now 1 (54.11437116s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:04:10.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-344" for this suite. 
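------------------------------
The restart observed above is the ExecProbeTimeout behavior (enforced by default since v1.20): an exec liveness handler that runs longer than timeoutSeconds is killed and counted as a probe failure, so the kubelet restarts the container even though the command never exits non-zero on its own. A hedged sketch of such a pod, with illustrative image and durations (note that in k8s.io/api v0.21 the probe's embedded field is Handler; later releases renamed it ProbeHandler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "exec-timeout-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// Sleeps past TimeoutSeconds, so every attempt is
						// killed by the kubelet and recorded as a failure.
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "sleep 10"},
						},
					},
					InitialDelaySeconds: 10,
					TimeoutSeconds:      1,
					PeriodSeconds:       10,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------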
• [SLOW TEST:64.177 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":8,"skipped":1067,"failed":0} Oct 30 04:04:10.932: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:04:03.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Oct 30 04:04:12.577: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:04:12.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1757" for this suite. 
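------------------------------
Graceful deletion, as exercised above, stamps the pod with deletionTimestamp and deletionGracePeriodSeconds; the kubelet sends SIGTERM, escalates to SIGKILL when the period lapses, and the test then polls until the name no longer resolves (the "no pod exists with the name we were looking for" line). A minimal client-go sketch of issuing such a delete; the kubeconfig path, namespace, and pod name are placeholders:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	grace := int64(30) // seconds between SIGTERM and SIGKILL
	err = client.CoreV1().Pods("default").Delete(context.TODO(),
		"example-pod", metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("graceful delete requested")
}
------------------------------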
• [SLOW TEST:9.081 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":3,"skipped":179,"failed":0} Oct 30 04:04:12.591: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:51.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-711696c9-3f2f-49fe-a954-f3061e07c0c8 in namespace container-probe-2924 Oct 30 04:03:55.193: INFO: Started pod liveness-override-711696c9-3f2f-49fe-a954-f3061e07c0c8 in namespace container-probe-2924 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:03:55.195: INFO: Initial restart count of pod liveness-override-711696c9-3f2f-49fe-a954-f3061e07c0c8 is 1 Oct 30 04:04:17.245: INFO: Restart count of pod container-probe-2924/liveness-override-711696c9-3f2f-49fe-a954-f3061e07c0c8 is now 2 (22.049707331s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:04:17.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2924" for this suite. 
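------------------------------
The Feature:ProbeTerminationGracePeriod spec above exercises the probe-level terminationGracePeriodSeconds field (alpha in v1.21, behind a feature gate): when a liveness probe kills a container, this value overrides the pod-level grace period, so a container that traps SIGTERM is force-killed quickly instead of holding up the restart. A sketch with illustrative values; the field exists on corev1.Probe from v0.21 onward:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "probe-grace-demo"},
		Spec: corev1.PodSpec{
			// Pod-level grace period: on its own, every probe-driven restart
			// of a SIGTERM-ignoring container would wait a full minute.
			TerminationGracePeriodSeconds: int64Ptr(60),
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "trap '' TERM; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					PeriodSeconds:    1,
					FailureThreshold: 1,
					// The probe-level override wins for liveness-initiated
					// kills, capping the wait at 5s.
					TerminationGracePeriodSeconds: int64Ptr(5),
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------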
• [SLOW TEST:26.105 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":10,"skipped":1730,"failed":0} Oct 30 04:04:17.262: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:03:06.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Oct 30 04:03:06.532: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Oct 30 04:03:06.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2316 create -f -' Oct 30 04:03:06.943: INFO: stderr: "" Oct 30 04:03:06.943: INFO: stdout: "pod/liveness-exec created\n" Oct 30 04:03:06.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2316 create -f -' Oct 30 04:03:07.253: INFO: stderr: "" Oct 30 04:03:07.253: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Oct 30 04:03:17.267: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:19.261: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:19.271: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:21.266: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:21.274: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:23.271: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:23.277: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:25.276: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:25.280: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:27.280: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:27.283: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:29.284: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:29.286: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:31.289: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:31.289: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:33.292: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:33.292: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:35.296: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:35.296: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:37.301: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:37.301: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:39.306: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:39.307: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:41.311: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:41.311: INFO: Pod: liveness-http, restart count:0 Oct 30 
04:03:43.315: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:43.315: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:45.318: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:45.318: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:47.324: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:47.324: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:49.327: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:49.328: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:51.331: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:51.332: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:53.335: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:53.335: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:55.338: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:55.338: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:57.342: INFO: Pod: liveness-http, restart count:0 Oct 30 04:03:57.342: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:59.346: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:03:59.346: INFO: Pod: liveness-http, restart count:1 Oct 30 04:03:59.346: INFO: Saw liveness-http restart, succeeded... Oct 30 04:04:01.350: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:03.353: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:05.356: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:07.361: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:09.365: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:11.370: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:13.374: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:15.379: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:17.384: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:19.389: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:21.394: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:23.398: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:25.402: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:27.407: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:04:29.412: INFO: Pod: liveness-exec, restart count:1 Oct 30 04:04:29.412: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:04:29.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-2316" for this suite. 
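------------------------------
The two example pods polled above are the stock documentation liveness examples: liveness-exec deletes its own health file after about 30 seconds so the cat-based probe begins failing, and liveness-http starts serving errors from its health endpoint, so both eventually restart, which is exactly what the two "Saw ... restart, succeeded" lines record. A sketch of the exec variant, mirroring the published example (the fixture shipped with the tests may differ in detail):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox:1.28",
				// Healthy for 30s, then `cat /tmp/healthy` starts failing.
				Args: []string{"/bin/sh", "-c",
					"touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"cat", "/tmp/healthy"},
						},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------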
• [SLOW TEST:82.917 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":4,"skipped":162,"failed":0} Oct 30 04:04:29.423: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:01:30.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1030 04:01:30.130380 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:01:30.130: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:01:30.132: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-fbf2ac47-2a77-490d-87d7-e659fc63b13f in namespace container-probe-2368 Oct 30 04:01:48.157: INFO: Started pod liveness-fbf2ac47-2a77-490d-87d7-e659fc63b13f in namespace container-probe-2368 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:01:48.159: INFO: Initial restart count of pod liveness-fbf2ac47-2a77-490d-87d7-e659fc63b13f is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:05:49.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2368" for this suite. 
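------------------------------
Since v1.14 the kubelet's HTTP prober only follows redirects to the same host; a 3xx response pointing at another host is recorded as a probe success (plus a ProbeWarning event) rather than chased, which is why the pod above survives the full observation window with restartCount 0. A heavily hedged sketch of such a pod: the agnhost /redirect endpoint, its query parameter, and the port are assumptions modeled on the e2e fixtures, not copied from them.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-nonlocal-redirect"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"netexec", "--http-port=8080"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							// Assumed handler: answers 302 with a Location on
							// a different host; the kubelet records success
							// and emits a ProbeWarning instead of following.
							Path: "/redirect?loc=http%3A%2F%2F0.0.0.0%2F",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------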
• [SLOW TEST:258.947 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":130,"failed":0} Oct 30 04:05:49.057: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:23.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-ac4ac464-9d98-423d-a05c-f01fcdf15444 in namespace container-probe-6201 Oct 30 04:02:31.579: INFO: Started pod startup-ac4ac464-9d98-423d-a05c-f01fcdf15444 in namespace container-probe-6201 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:02:31.581: INFO: Initial restart count of pod startup-ac4ac464-9d98-423d-a05c-f01fcdf15444 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:06:32.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6201" for this suite. 
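------------------------------
The spec above demonstrates startup-probe gating: liveness and readiness probes do not run at all until the startup probe succeeds once, giving a slow starter up to failureThreshold x periodSeconds to come up without the liveness probe triggering a restart, hence the restart count staying 0 for the whole observation window. A sketch with illustrative numbers and image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	httpGet := corev1.Handler{
		HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-gates-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "slow-starter",
				Image: "nginx:1.21.1",
				// Not evaluated until the startup probe has succeeded once,
				// so a slow boot cannot trip it.
				LivenessProbe: &corev1.Probe{
					Handler:          httpGet,
					PeriodSeconds:    10,
					FailureThreshold: 1,
				},
				// Allows up to 30 x 10s = 300s for the first success.
				StartupProbe: &corev1.Probe{
					Handler:          httpGet,
					PeriodSeconds:    10,
					FailureThreshold: 30,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------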
• [SLOW TEST:248.637 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":4,"skipped":182,"failed":0} Oct 30 04:06:32.183: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:02:51.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Oct 30 04:02:51.918: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Oct 30 04:02:52.929: INFO: node status heartbeat is unchanged for 1.004125389s, waiting for 1m20s Oct 30 04:02:53.930: INFO: node status heartbeat is unchanged for 2.005079136s, waiting for 1m20s Oct 30 04:02:54.929: INFO: node status heartbeat is unchanged for 3.003607788s, waiting for 1m20s Oct 30 04:02:55.929: INFO: node status heartbeat is unchanged for 4.004132155s, waiting for 1m20s Oct 30 04:02:56.931: INFO: node status heartbeat is unchanged for 5.005848125s, waiting for 1m20s Oct 30 04:02:57.929: INFO: node status heartbeat is unchanged for 6.003769522s, waiting for 1m20s Oct 30 04:02:58.930: INFO: node status heartbeat is unchanged for 7.004215282s, waiting for 1m20s Oct 30 04:02:59.930: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:02:59.937: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:02:49 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:02:59 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: 
"DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:02:49 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:02:59 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:02:49 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:02:59 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 30 04:03:00.929: INFO: node status heartbeat is unchanged for 999.514558ms, waiting for 1m20s Oct 30 04:03:01.931: INFO: node status heartbeat is unchanged for 2.001035329s, waiting for 1m20s Oct 30 04:03:02.930: INFO: node status heartbeat is unchanged for 3.00054219s, waiting for 1m20s Oct 30 04:03:03.930: INFO: node status heartbeat is unchanged for 4.000252917s, waiting for 1m20s Oct 30 04:03:04.929: INFO: node status heartbeat is unchanged for 4.998880064s, waiting for 1m20s Oct 30 04:03:05.931: INFO: node status heartbeat is unchanged for 6.001388047s, waiting for 1m20s Oct 30 04:03:06.929: INFO: node status heartbeat is unchanged for 6.998903208s, waiting for 1m20s Oct 30 04:03:07.932: INFO: node status heartbeat is unchanged for 8.002556479s, waiting for 1m20s Oct 30 04:03:08.929: INFO: node status heartbeat is unchanged for 8.999236349s, waiting for 1m20s Oct 30 04:03:09.929: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:03:09.933: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:02:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:09 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:02:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:09 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: 
"False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:02:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:09 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 30 04:03:10.931: INFO: node status heartbeat is unchanged for 1.00259416s, waiting for 1m20s Oct 30 04:03:11.932: INFO: node status heartbeat is unchanged for 2.003235553s, waiting for 1m20s Oct 30 04:03:12.932: INFO: node status heartbeat is unchanged for 3.003329568s, waiting for 1m20s Oct 30 04:03:13.930: INFO: node status heartbeat is unchanged for 4.000949645s, waiting for 1m20s Oct 30 04:03:14.931: INFO: node status heartbeat is unchanged for 5.002434658s, waiting for 1m20s Oct 30 04:03:15.931: INFO: node status heartbeat is unchanged for 6.002112289s, waiting for 1m20s Oct 30 04:03:16.929: INFO: node status heartbeat is unchanged for 7.000009618s, waiting for 1m20s Oct 30 04:03:17.928: INFO: node status heartbeat is unchanged for 7.999540208s, waiting for 1m20s Oct 30 04:03:18.930: INFO: node status heartbeat is unchanged for 9.000800249s, waiting for 1m20s Oct 30 04:03:19.929: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:03:19.934: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:09 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:19 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:09 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:19 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:09 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:19 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: 
s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 30 04:03:20.929: INFO: node status heartbeat is unchanged for 999.893497ms, waiting for 1m20s Oct 30 04:03:21.930: INFO: node status heartbeat is unchanged for 2.000832292s, waiting for 1m20s Oct 30 04:03:22.928: INFO: node status heartbeat is unchanged for 2.998954616s, waiting for 1m20s Oct 30 04:03:23.932: INFO: node status heartbeat is unchanged for 4.002489984s, waiting for 1m20s Oct 30 04:03:24.929: INFO: node status heartbeat is unchanged for 5.000198154s, waiting for 1m20s Oct 30 04:03:25.930: INFO: node status heartbeat is unchanged for 6.001169937s, waiting for 1m20s Oct 30 04:03:26.930: INFO: node status heartbeat is unchanged for 7.000758967s, waiting for 1m20s Oct 30 04:03:27.930: INFO: node status heartbeat is unchanged for 8.000757724s, waiting for 1m20s Oct 30 04:03:28.930: INFO: node status heartbeat is unchanged for 9.000646253s, waiting for 1m20s Oct 30 04:03:29.929: INFO: node status heartbeat is unchanged for 10.000093618s, waiting for 1m20s Oct 30 04:03:30.930: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Oct 30 04:03:30.934: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:19 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:19 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:19 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:03:31.930: INFO: node status heartbeat is unchanged for 999.939611ms, waiting for 1m20s Oct 30 04:03:32.932: INFO: node status heartbeat is unchanged for 2.002405329s, waiting for 1m20s Oct 30 04:03:33.929: INFO: node status heartbeat is unchanged for 2.999438644s, waiting for 1m20s Oct 30 04:03:34.929: INFO: node status heartbeat is unchanged for 3.998985999s, waiting for 1m20s Oct 30 04:03:35.929: INFO: node status heartbeat is unchanged for 4.999768314s, waiting for 1m20s Oct 30 04:03:36.930: INFO: node status heartbeat is unchanged for 6.000048821s, waiting for 1m20s Oct 30 04:03:37.930: INFO: node status heartbeat is unchanged for 7.000525324s, waiting for 1m20s Oct 30 04:03:38.930: INFO: node status heartbeat is unchanged for 8.000518978s, waiting for 1m20s Oct 30 04:03:39.929: INFO: node status heartbeat is unchanged for 8.999733212s, waiting for 1m20s Oct 30 04:03:40.929: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:03:40.934: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:03:41.929: INFO: node status heartbeat is unchanged for 1.000107901s, waiting for 1m20s Oct 30 04:03:42.931: INFO: node status heartbeat is unchanged for 2.00165292s, waiting for 1m20s Oct 30 04:03:43.931: INFO: node status heartbeat is unchanged for 3.001592657s, waiting for 1m20s Oct 30 04:03:44.929: INFO: node status heartbeat is unchanged for 3.999725813s, waiting for 1m20s Oct 30 04:03:45.930: INFO: node status heartbeat is unchanged for 5.000572519s, waiting for 1m20s Oct 30 04:03:46.929: INFO: node status heartbeat is unchanged for 5.999768813s, waiting for 1m20s Oct 30 04:03:47.930: INFO: node status heartbeat is unchanged for 7.000298467s, waiting for 1m20s Oct 30 04:03:48.930: INFO: node status heartbeat is unchanged for 8.000533052s, waiting for 1m20s Oct 30 04:03:49.931: INFO: node status heartbeat is unchanged for 9.001416489s, waiting for 1m20s Oct 30 04:03:50.932: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:03:50.937: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:03:51.929: INFO: node status heartbeat is unchanged for 996.860843ms, waiting for 1m20s Oct 30 04:03:52.928: INFO: node status heartbeat is unchanged for 1.996508028s, waiting for 1m20s Oct 30 04:03:53.930: INFO: node status heartbeat is unchanged for 2.997913921s, waiting for 1m20s Oct 30 04:03:54.929: INFO: node status heartbeat is unchanged for 3.997383138s, waiting for 1m20s Oct 30 04:03:55.930: INFO: node status heartbeat is unchanged for 4.997764917s, waiting for 1m20s Oct 30 04:03:56.931: INFO: node status heartbeat is unchanged for 5.998646894s, waiting for 1m20s Oct 30 04:03:57.929: INFO: node status heartbeat is unchanged for 6.997364198s, waiting for 1m20s Oct 30 04:03:58.929: INFO: node status heartbeat is unchanged for 7.996930897s, waiting for 1m20s Oct 30 04:03:59.931: INFO: node status heartbeat is unchanged for 8.998534849s, waiting for 1m20s Oct 30 04:04:00.931: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:04:00.936: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:03:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:04:01.931: INFO: node status heartbeat is unchanged for 999.65244ms, waiting for 1m20s Oct 30 04:04:02.932: INFO: node status heartbeat is unchanged for 2.00029027s, waiting for 1m20s Oct 30 04:04:03.930: INFO: node status heartbeat is unchanged for 2.998464308s, waiting for 1m20s Oct 30 04:04:04.929: INFO: node status heartbeat is unchanged for 3.998016221s, waiting for 1m20s Oct 30 04:04:05.930: INFO: node status heartbeat is unchanged for 4.998902335s, waiting for 1m20s Oct 30 04:04:06.929: INFO: node status heartbeat is unchanged for 5.997647327s, waiting for 1m20s Oct 30 04:04:07.930: INFO: node status heartbeat is unchanged for 6.998937822s, waiting for 1m20s Oct 30 04:04:08.929: INFO: node status heartbeat is unchanged for 7.997848822s, waiting for 1m20s Oct 30 04:04:09.930: INFO: node status heartbeat is unchanged for 8.998354666s, waiting for 1m20s Oct 30 04:04:10.930: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:04:10.934: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:04:11.929: INFO: node status heartbeat is unchanged for 999.139386ms, waiting for 1m20s Oct 30 04:04:12.929: INFO: node status heartbeat is unchanged for 1.998929412s, waiting for 1m20s Oct 30 04:04:13.929: INFO: node status heartbeat is unchanged for 2.999040708s, waiting for 1m20s Oct 30 04:04:14.930: INFO: node status heartbeat is unchanged for 4.000466948s, waiting for 1m20s Oct 30 04:04:15.930: INFO: node status heartbeat is unchanged for 5.000163997s, waiting for 1m20s Oct 30 04:04:16.929: INFO: node status heartbeat is unchanged for 5.999432114s, waiting for 1m20s Oct 30 04:04:17.929: INFO: node status heartbeat is unchanged for 6.999226394s, waiting for 1m20s Oct 30 04:04:18.929: INFO: node status heartbeat is unchanged for 7.999576512s, waiting for 1m20s Oct 30 04:04:19.930: INFO: node status heartbeat is unchanged for 8.999829207s, waiting for 1m20s Oct 30 04:04:20.929: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:04:20.934: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:04:21.930: INFO: node status heartbeat is unchanged for 1.000687337s, waiting for 1m20s Oct 30 04:04:22.930: INFO: node status heartbeat is unchanged for 2.001161475s, waiting for 1m20s Oct 30 04:04:23.928: INFO: node status heartbeat is unchanged for 2.999490254s, waiting for 1m20s Oct 30 04:04:24.929: INFO: node status heartbeat is unchanged for 4.000501052s, waiting for 1m20s Oct 30 04:04:25.929: INFO: node status heartbeat is unchanged for 4.999596566s, waiting for 1m20s Oct 30 04:04:26.931: INFO: node status heartbeat is unchanged for 6.001922488s, waiting for 1m20s Oct 30 04:04:27.931: INFO: node status heartbeat is unchanged for 7.002194849s, waiting for 1m20s Oct 30 04:04:28.929: INFO: node status heartbeat is unchanged for 7.999765387s, waiting for 1m20s Oct 30 04:04:29.929: INFO: node status heartbeat is unchanged for 9.000389793s, waiting for 1m20s Oct 30 04:04:30.929: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:04:30.934: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    NodeInfo: {MachineID: "3bf4179125e4495c89c046ed0ae7baf7", SystemUUID: "00CDA902-D022-E711-906E-0017A4403562", BootID: "ce868148-dc5e-4c7c-a555-42ee929547f7", KernelVersion: "3.10.0-1160.45.1.el7.x86_64", ...},    Images: []v1.ContainerImage{    ... 
// 23 identical elements    {Names: {"quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1"..., "quay.io/coreos/kube-rbac-proxy:v0.5.0"}, SizeBytes: 46626428},    {Names: {"localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e2"..., "nfvpe/sriov-device-plugin:latest", "localhost:30500/sriov-device-plugin:v3.3.2"}, SizeBytes: 42674030}, +  { +  Names: []string{ +  "k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34d"..., +  "k8s.gcr.io/e2e-test-images/nonroot:1.1", +  }, +  SizeBytes: 42321438, +  },    {Names: {"kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f"..., "kubernetesui/metrics-scraper:v1.0.6"}, SizeBytes: 34548789},    {Names: {"quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72"..., "quay.io/prometheus/node-exporter:v1.0.1"}, SizeBytes: 26430341},    ... // 11 identical elements    },    VolumesInUse: nil,    VolumesAttached: nil,    Config: nil,   } Oct 30 04:04:31.931: INFO: node status heartbeat is unchanged for 1.002434492s, waiting for 1m20s Oct 30 04:04:32.929: INFO: node status heartbeat is unchanged for 2.000333094s, waiting for 1m20s Oct 30 04:04:33.930: INFO: node status heartbeat is unchanged for 3.001069771s, waiting for 1m20s Oct 30 04:04:34.929: INFO: node status heartbeat is unchanged for 3.99991951s, waiting for 1m20s Oct 30 04:04:35.931: INFO: node status heartbeat is unchanged for 5.001986422s, waiting for 1m20s Oct 30 04:04:36.930: INFO: node status heartbeat is unchanged for 6.000747281s, waiting for 1m20s Oct 30 04:04:37.932: INFO: node status heartbeat is unchanged for 7.00316711s, waiting for 1m20s Oct 30 04:04:38.929: INFO: node status heartbeat is unchanged for 8.00008237s, waiting for 1m20s Oct 30 04:04:39.928: INFO: node status heartbeat is unchanged for 8.999054096s, waiting for 1m20s Oct 30 04:04:40.930: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:04:40.934: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:30 +0000 UTC"},
+  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 30 04:04:41.931: INFO: node status heartbeat is unchanged for 1.00159742s, waiting for 1m20s Oct 30 04:04:42.930: INFO: node status heartbeat is unchanged for 2.00030825s, waiting for 1m20s Oct 30 04:04:43.931: INFO: node status heartbeat is unchanged for 3.001409711s, waiting for 1m20s Oct 30 04:04:44.930: INFO: node status heartbeat is unchanged for 4.00028071s, waiting for 1m20s Oct 30 04:04:45.931: INFO: node status heartbeat is unchanged for 5.001871785s, waiting for 1m20s Oct 30 04:04:46.930: INFO: node status heartbeat is unchanged for 6.000202805s, waiting for 1m20s Oct 30 04:04:47.932: INFO: node status heartbeat is unchanged for 7.002414953s, waiting for 1m20s Oct 30 04:04:48.930: INFO: node status heartbeat is unchanged for 8.000391078s, waiting for 1m20s Oct 30 04:04:49.931: INFO: node status heartbeat is unchanged for 9.001062361s, waiting for 1m20s Oct 30 04:04:50.932: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:04:50.937: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:50 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    
Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 30 04:04:51.930: INFO: node status heartbeat is unchanged for 997.60329ms, waiting for 1m20s Oct 30 04:04:52.931: INFO: node status heartbeat is unchanged for 1.998690954s, waiting for 1m20s Oct 30 04:04:53.932: INFO: node status heartbeat is unchanged for 2.999715192s, waiting for 1m20s Oct 30 04:04:54.930: INFO: node status heartbeat is unchanged for 3.997353515s, waiting for 1m20s Oct 30 04:04:55.931: INFO: node status heartbeat is unchanged for 4.999036278s, waiting for 1m20s Oct 30 04:04:56.932: INFO: node status heartbeat is unchanged for 5.99964631s, waiting for 1m20s Oct 30 04:04:57.930: INFO: node status heartbeat is unchanged for 6.997968286s, waiting for 1m20s Oct 30 04:04:58.929: INFO: node status heartbeat is unchanged for 7.996952538s, waiting for 1m20s Oct 30 04:04:59.930: INFO: node status heartbeat is unchanged for 8.9974412s, waiting for 1m20s Oct 30 04:05:00.932: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:05:00.936: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:04:50 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:00 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:05:01.930: INFO: node status heartbeat is unchanged for 998.230915ms, waiting for 1m20s Oct 30 04:05:02.930: INFO: node status heartbeat is unchanged for 1.998727035s, waiting for 1m20s Oct 30 04:05:03.931: INFO: node status heartbeat is unchanged for 2.998951941s, waiting for 1m20s Oct 30 04:05:04.930: INFO: node status heartbeat is unchanged for 3.998881822s, waiting for 1m20s Oct 30 04:05:05.931: INFO: node status heartbeat is unchanged for 4.999204799s, waiting for 1m20s Oct 30 04:05:06.931: INFO: node status heartbeat is unchanged for 5.999293023s, waiting for 1m20s Oct 30 04:05:07.931: INFO: node status heartbeat is unchanged for 6.999080941s, waiting for 1m20s Oct 30 04:05:08.930: INFO: node status heartbeat is unchanged for 7.99842358s, waiting for 1m20s Oct 30 04:05:09.929: INFO: node status heartbeat is unchanged for 8.997155663s, waiting for 1m20s Oct 30 04:05:10.931: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:05:10.935: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:00 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:10 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:05:11.930: INFO: node status heartbeat is unchanged for 999.461727ms, waiting for 1m20s Oct 30 04:05:12.931: INFO: node status heartbeat is unchanged for 2.000337191s, waiting for 1m20s Oct 30 04:05:13.931: INFO: node status heartbeat is unchanged for 2.999844657s, waiting for 1m20s Oct 30 04:05:14.929: INFO: node status heartbeat is unchanged for 3.998520845s, waiting for 1m20s Oct 30 04:05:15.929: INFO: node status heartbeat is unchanged for 4.998709488s, waiting for 1m20s Oct 30 04:05:16.930: INFO: node status heartbeat is unchanged for 5.998729306s, waiting for 1m20s Oct 30 04:05:17.929: INFO: node status heartbeat is unchanged for 6.998692146s, waiting for 1m20s Oct 30 04:05:18.929: INFO: node status heartbeat is unchanged for 7.998584603s, waiting for 1m20s Oct 30 04:05:19.928: INFO: node status heartbeat is unchanged for 8.997649679s, waiting for 1m20s Oct 30 04:05:20.934: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:05:20.939: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:20 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:05:21.931: INFO: node status heartbeat is unchanged for 996.955741ms, waiting for 1m20s Oct 30 04:05:22.930: INFO: node status heartbeat is unchanged for 1.996186973s, waiting for 1m20s Oct 30 04:05:23.930: INFO: node status heartbeat is unchanged for 2.995564892s, waiting for 1m20s Oct 30 04:05:24.929: INFO: node status heartbeat is unchanged for 3.994676357s, waiting for 1m20s Oct 30 04:05:25.929: INFO: node status heartbeat is unchanged for 4.995212884s, waiting for 1m20s Oct 30 04:05:26.932: INFO: node status heartbeat is unchanged for 5.997887017s, waiting for 1m20s Oct 30 04:05:27.931: INFO: node status heartbeat is unchanged for 6.997048594s, waiting for 1m20s Oct 30 04:05:28.930: INFO: node status heartbeat is unchanged for 7.995595199s, waiting for 1m20s Oct 30 04:05:29.929: INFO: node status heartbeat is unchanged for 8.994473355s, waiting for 1m20s Oct 30 04:05:30.932: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:05:30.937: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:30 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:05:31.932: INFO: node status heartbeat is unchanged for 1.000402617s, waiting for 1m20s Oct 30 04:05:32.931: INFO: node status heartbeat is unchanged for 1.999400606s, waiting for 1m20s Oct 30 04:05:33.930: INFO: node status heartbeat is unchanged for 2.997504242s, waiting for 1m20s Oct 30 04:05:34.931: INFO: node status heartbeat is unchanged for 3.998550314s, waiting for 1m20s Oct 30 04:05:35.930: INFO: node status heartbeat is unchanged for 4.997715306s, waiting for 1m20s Oct 30 04:05:36.932: INFO: node status heartbeat is unchanged for 5.999970942s, waiting for 1m20s Oct 30 04:05:37.929: INFO: node status heartbeat is unchanged for 6.99722087s, waiting for 1m20s Oct 30 04:05:38.930: INFO: node status heartbeat is unchanged for 7.998288277s, waiting for 1m20s Oct 30 04:05:39.929: INFO: node status heartbeat is unchanged for 8.997207256s, waiting for 1m20s Oct 30 04:05:40.930: INFO: node status heartbeat is unchanged for 9.99778172s, waiting for 1m20s Oct 30 04:05:41.929: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:05:41.934: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:40 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:05:42.930: INFO: node status heartbeat is unchanged for 1.000461737s, waiting for 1m20s Oct 30 04:05:43.930: INFO: node status heartbeat is unchanged for 2.000349859s, waiting for 1m20s Oct 30 04:05:44.930: INFO: node status heartbeat is unchanged for 3.001178214s, waiting for 1m20s Oct 30 04:05:45.930: INFO: node status heartbeat is unchanged for 4.000947863s, waiting for 1m20s Oct 30 04:05:46.932: INFO: node status heartbeat is unchanged for 5.002624352s, waiting for 1m20s Oct 30 04:05:47.931: INFO: node status heartbeat is unchanged for 6.002198913s, waiting for 1m20s Oct 30 04:05:48.930: INFO: node status heartbeat is unchanged for 7.000768014s, waiting for 1m20s Oct 30 04:05:49.930: INFO: node status heartbeat is unchanged for 8.000478045s, waiting for 1m20s Oct 30 04:05:50.932: INFO: node status heartbeat is unchanged for 9.002396208s, waiting for 1m20s Oct 30 04:05:51.932: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Oct 30 04:05:51.936: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:51 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:51 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:51 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:05:52.933: INFO: node status heartbeat is unchanged for 1.000948143s, waiting for 1m20s Oct 30 04:05:53.930: INFO: node status heartbeat is unchanged for 1.998805624s, waiting for 1m20s Oct 30 04:05:54.932: INFO: node status heartbeat is unchanged for 2.999997569s, waiting for 1m20s Oct 30 04:05:55.933: INFO: node status heartbeat is unchanged for 4.001254472s, waiting for 1m20s Oct 30 04:05:56.932: INFO: node status heartbeat is unchanged for 4.999905134s, waiting for 1m20s Oct 30 04:05:57.931: INFO: node status heartbeat is unchanged for 5.999020597s, waiting for 1m20s Oct 30 04:05:58.930: INFO: node status heartbeat is unchanged for 6.998817872s, waiting for 1m20s Oct 30 04:05:59.932: INFO: node status heartbeat is unchanged for 8.000239255s, waiting for 1m20s Oct 30 04:06:00.931: INFO: node status heartbeat is unchanged for 8.999529085s, waiting for 1m20s Oct 30 04:06:01.931: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:06:01.936: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:01 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:01 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:05:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:01 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:06:02.932: INFO: node status heartbeat is unchanged for 1.00091439s, waiting for 1m20s Oct 30 04:06:03.932: INFO: node status heartbeat is unchanged for 2.001694905s, waiting for 1m20s Oct 30 04:06:04.932: INFO: node status heartbeat is unchanged for 3.001336875s, waiting for 1m20s Oct 30 04:06:05.930: INFO: node status heartbeat is unchanged for 3.998885825s, waiting for 1m20s Oct 30 04:06:06.933: INFO: node status heartbeat is unchanged for 5.002002559s, waiting for 1m20s Oct 30 04:06:07.931: INFO: node status heartbeat is unchanged for 5.999956305s, waiting for 1m20s Oct 30 04:06:08.930: INFO: node status heartbeat is unchanged for 6.99890555s, waiting for 1m20s Oct 30 04:06:09.932: INFO: node status heartbeat is unchanged for 8.000884811s, waiting for 1m20s Oct 30 04:06:10.934: INFO: node status heartbeat is unchanged for 9.003236151s, waiting for 1m20s Oct 30 04:06:11.930: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:06:11.935: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:01 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:11 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:01 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:11 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:01 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:11 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:06:12.931: INFO: node status heartbeat is unchanged for 1.00057923s, waiting for 1m20s Oct 30 04:06:13.930: INFO: node status heartbeat is unchanged for 1.999564126s, waiting for 1m20s Oct 30 04:06:14.930: INFO: node status heartbeat is unchanged for 3.000376105s, waiting for 1m20s Oct 30 04:06:15.930: INFO: node status heartbeat is unchanged for 4.000092121s, waiting for 1m20s Oct 30 04:06:16.930: INFO: node status heartbeat is unchanged for 5.000329134s, waiting for 1m20s Oct 30 04:06:17.931: INFO: node status heartbeat is unchanged for 6.001090087s, waiting for 1m20s Oct 30 04:06:18.930: INFO: node status heartbeat is unchanged for 6.999759711s, waiting for 1m20s Oct 30 04:06:19.929: INFO: node status heartbeat is unchanged for 7.999223516s, waiting for 1m20s Oct 30 04:06:20.931: INFO: node status heartbeat is unchanged for 9.00069574s, waiting for 1m20s Oct 30 04:06:21.930: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:06:21.935: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:11 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:21 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:11 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:21 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:11 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:21 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:06:22.932: INFO: node status heartbeat is unchanged for 1.001566366s, waiting for 1m20s Oct 30 04:06:23.932: INFO: node status heartbeat is unchanged for 2.001655591s, waiting for 1m20s Oct 30 04:06:24.932: INFO: node status heartbeat is unchanged for 3.001309025s, waiting for 1m20s Oct 30 04:06:25.930: INFO: node status heartbeat is unchanged for 3.999876281s, waiting for 1m20s Oct 30 04:06:26.932: INFO: node status heartbeat is unchanged for 5.001648213s, waiting for 1m20s Oct 30 04:06:27.929: INFO: node status heartbeat is unchanged for 5.999029268s, waiting for 1m20s Oct 30 04:06:28.930: INFO: node status heartbeat is unchanged for 6.99928139s, waiting for 1m20s Oct 30 04:06:29.929: INFO: node status heartbeat is unchanged for 7.998900008s, waiting for 1m20s Oct 30 04:06:30.930: INFO: node status heartbeat is unchanged for 8.999366091s, waiting for 1m20s Oct 30 04:06:31.930: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:06:31.935: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:31 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:31 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:31 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 30 04:06:32.929: INFO: node status heartbeat is unchanged for 998.826629ms, waiting for 1m20s Oct 30 04:06:33.929: INFO: node status heartbeat is unchanged for 1.999276265s, waiting for 1m20s Oct 30 04:06:34.930: INFO: node status heartbeat is unchanged for 2.999775828s, waiting for 1m20s Oct 30 04:06:35.932: INFO: node status heartbeat is unchanged for 4.002219327s, waiting for 1m20s Oct 30 04:06:36.932: INFO: node status heartbeat is unchanged for 5.001725933s, waiting for 1m20s Oct 30 04:06:37.932: INFO: node status heartbeat is unchanged for 6.001561703s, waiting for 1m20s Oct 30 04:06:38.929: INFO: node status heartbeat is unchanged for 6.998986903s, waiting for 1m20s Oct 30 04:06:39.931: INFO: node status heartbeat is unchanged for 8.000855203s, waiting for 1m20s Oct 30 04:06:40.933: INFO: node status heartbeat is unchanged for 9.002999374s, waiting for 1m20s Oct 30 04:06:41.930: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:06:41.934: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:41 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:41 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:06:41 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
  	... // 5 identical fields
  }
Oct 30 04:06:42.931: INFO: node status heartbeat is unchanged for 1.001657693s, waiting for 1m20s
Oct 30 04:06:43.930: INFO: node status heartbeat is unchanged for 2.000944162s, waiting for 1m20s
Oct 30 04:06:44.932: INFO: node status heartbeat is unchanged for 3.002794809s, waiting for 1m20s
Oct 30 04:06:45.930: INFO: node status heartbeat is unchanged for 4.000650371s, waiting for 1m20s
Oct 30 04:06:46.929: INFO: node status heartbeat is unchanged for 5.000005651s, waiting for 1m20s
Oct 30 04:06:47.931: INFO: node status heartbeat is unchanged for 6.001114014s, waiting for 1m20s
Oct 30 04:06:48.930: INFO: node status heartbeat is unchanged for 7.000840502s, waiting for 1m20s
Oct 30 04:06:49.931: INFO: node status heartbeat is unchanged for 8.001252331s, waiting for 1m20s
Oct 30 04:06:50.930: INFO: node status heartbeat is unchanged for 9.000739374s, waiting for 1m20s
Oct 30 04:06:51.929: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:06:51.933: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
  		{
  			Type:   "MemoryPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:06:41 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:06:51 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:   "DiskPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:06:41 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:06:51 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:   "PIDPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:06:41 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:06:51 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 30 04:06:52.929: INFO: node status heartbeat is unchanged for 1.000529231s, waiting for 1m20s
Oct 30 04:06:53.929: INFO: node status heartbeat is unchanged for 1.999978883s, waiting for 1m20s
Oct 30 04:06:54.929: INFO: node status heartbeat is unchanged for 3.000789635s, waiting for 1m20s
Oct 30 04:06:55.929: INFO: node status heartbeat is unchanged for 4.000279496s, waiting for 1m20s
Oct 30 04:06:56.929: INFO: node status heartbeat is unchanged for 5.000543032s, waiting for 1m20s
Oct 30 04:06:57.930: INFO: node status heartbeat is unchanged for 6.00137114s, waiting for 1m20s
Oct 30 04:06:58.929: INFO: node status heartbeat is unchanged for 7.000397127s, waiting for 1m20s
Oct 30 04:06:59.929: INFO: node status heartbeat is unchanged for 8.000526427s, waiting for 1m20s
Oct 30 04:07:00.930: INFO: node status heartbeat is unchanged for 9.001145183s, waiting for 1m20s
Oct 30 04:07:01.930: INFO: node status heartbeat is unchanged for 10.001021436s, waiting for 1m20s
Oct 30 04:07:02.929: INFO: node status heartbeat is unchanged for 11.00074637s, waiting for 1m20s
Oct 30 04:07:03.929: INFO: node status heartbeat changed in 12s (with other status changes), waiting for 40s
Oct 30 04:07:03.934: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
  		{
  			Type:   "MemoryPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:06:51 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:03 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:   "DiskPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:06:51 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:03 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:   "PIDPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:06:51 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:03 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 30 04:07:04.929: INFO: node status heartbeat is unchanged for 1.000442446s, waiting for 1m20s
Oct 30 04:07:05.929: INFO: node status heartbeat is unchanged for 2.000043908s, waiting for 1m20s
Oct 30 04:07:06.930: INFO: node status heartbeat is unchanged for 3.000955621s, waiting for 1m20s
Oct 30 04:07:07.931: INFO: node status heartbeat is unchanged for 4.00155242s, waiting for 1m20s
Oct 30 04:07:08.929: INFO: node status heartbeat is unchanged for 5.000071959s, waiting for 1m20s
Oct 30 04:07:09.931: INFO: node status heartbeat is unchanged for 6.002221047s, waiting for 1m20s
Oct 30 04:07:10.933: INFO: node status heartbeat is unchanged for 7.004343239s, waiting for 1m20s
Oct 30 04:07:11.930: INFO: node status heartbeat is unchanged for 8.000814174s, waiting for 1m20s
Oct 30 04:07:12.930: INFO: node status heartbeat is unchanged for 9.001000743s, waiting for 1m20s
Oct 30 04:07:13.928: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:07:13.933: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
  		{
  			Type:   "MemoryPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:03 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:13 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:   "DiskPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:03 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:13 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:   "PIDPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:03 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:13 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 30 04:07:14.930: INFO: node status heartbeat is unchanged for 1.001185143s, waiting for 1m20s
Oct 30 04:07:15.930: INFO: node status heartbeat is unchanged for 2.001393615s, waiting for 1m20s
Oct 30 04:07:16.929: INFO: node status heartbeat is unchanged for 3.000613124s, waiting for 1m20s
Oct 30 04:07:17.930: INFO: node status heartbeat is unchanged for 4.001205271s, waiting for 1m20s
Oct 30 04:07:18.930: INFO: node status heartbeat is unchanged for 5.001735857s, waiting for 1m20s
Oct 30 04:07:19.929: INFO: node status heartbeat is unchanged for 6.00040391s, waiting for 1m20s
Oct 30 04:07:20.929: INFO: node status heartbeat is unchanged for 7.000792156s, waiting for 1m20s
Oct 30 04:07:21.930: INFO: node status heartbeat is unchanged for 8.001135279s, waiting for 1m20s
Oct 30 04:07:22.929: INFO: node status heartbeat is unchanged for 9.000786171s, waiting for 1m20s
Oct 30 04:07:23.930: INFO: node status heartbeat is unchanged for 10.001278264s, waiting for 1m20s
Oct 30 04:07:24.931: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:07:24.936: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
  		{
  			Type:   "MemoryPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:13 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:23 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:   "DiskPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:13 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:23 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:   "PIDPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:13 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:23 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 30 04:07:25.931: INFO: node status heartbeat is unchanged for 1.000092821s, waiting for 1m20s
Oct 30 04:07:26.932: INFO: node status heartbeat is unchanged for 2.000518452s, waiting for 1m20s
Oct 30 04:07:27.929: INFO: node status heartbeat is unchanged for 2.997700803s, waiting for 1m20s
Oct 30 04:07:28.930: INFO: node status heartbeat is unchanged for 3.998213702s, waiting for 1m20s
Oct 30 04:07:29.930: INFO: node status heartbeat is unchanged for 4.998290699s, waiting for 1m20s
Oct 30 04:07:30.930: INFO: node status heartbeat is unchanged for 5.998939522s, waiting for 1m20s
Oct 30 04:07:31.929: INFO: node status heartbeat is unchanged for 6.997516872s, waiting for 1m20s
Oct 30 04:07:32.929: INFO: node status heartbeat is unchanged for 7.997729143s, waiting for 1m20s
Oct 30 04:07:33.929: INFO: node status heartbeat is unchanged for 8.997734817s, waiting for 1m20s
Oct 30 04:07:34.930: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:07:34.934: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
  		{
  			Type:   "MemoryPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:23 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:33 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:   "DiskPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:23 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:33 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:   "PIDPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:23 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:33 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 30 04:07:35.930: INFO: node status heartbeat is unchanged for 999.76507ms, waiting for 1m20s
Oct 30 04:07:36.930: INFO: node status heartbeat is unchanged for 2.000555396s, waiting for 1m20s
Oct 30 04:07:37.932: INFO: node status heartbeat is unchanged for 3.001773407s, waiting for 1m20s
Oct 30 04:07:38.929: INFO: node status heartbeat is unchanged for 3.999626244s, waiting for 1m20s
Oct 30 04:07:39.930: INFO: node status heartbeat is unchanged for 5.000318308s, waiting for 1m20s
Oct 30 04:07:40.934: INFO: node status heartbeat is unchanged for 6.003948296s, waiting for 1m20s
Oct 30 04:07:41.929: INFO: node status heartbeat is unchanged for 6.99935468s, waiting for 1m20s
Oct 30 04:07:42.930: INFO: node status heartbeat is unchanged for 8.000126632s, waiting for 1m20s
Oct 30 04:07:43.930: INFO: node status heartbeat is unchanged for 8.999962066s, waiting for 1m20s
Oct 30 04:07:44.930: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Oct 30 04:07:44.934: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
  		{
  			Type:   "MemoryPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:33 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:44 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:   "DiskPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:33 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:44 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:   "PIDPressure",
  			Status: "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:33 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:07:44 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
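The +/- blocks above are structural diffs of two consecutive v1.NodeStatus reads: "-" lines carry the previous value, "+" lines the new one, s"..." marks a stringified value, and unchanged fields are elided as "// N identical fields". That shape matches github.com/google/go-cmp output; as a minimal sketch (an assumption about the library in play, not a claim about the framework's exact call):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
	v1 "k8s.io/api/core/v1"
)

func main() {
	before := v1.NodeStatus{Phase: v1.NodeRunning}
	after := v1.NodeStatus{Phase: v1.NodePending}
	// cmp.Diff prints "-" for the old value, "+" for the new one, and
	// elides unchanged fields, much like the blocks above. Real NodeStatus
	// values with populated resource quantities may need cmp options for
	// types with unexported fields (e.g. resource.Quantity).
	fmt.Println(cmp.Diff(before, after))
}
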
Oct 30 04:07:45.929: INFO: node status heartbeat is unchanged for 999.480046ms, waiting for 1m20s
Oct 30 04:07:46.929: INFO: node status heartbeat is unchanged for 1.999307244s, waiting for 1m20s
Oct 30 04:07:47.930: INFO: node status heartbeat is unchanged for 3.000372546s, waiting for 1m20s
Oct 30 04:07:48.930: INFO: node status heartbeat is unchanged for 4.000245602s, waiting for 1m20s
Oct 30 04:07:49.929: INFO: node status heartbeat is unchanged for 4.999184417s, waiting for 1m20s
Oct 30 04:07:50.930: INFO: node status heartbeat is unchanged for 6.000370705s, waiting for 1m20s
Oct 30 04:07:51.929: INFO: node status heartbeat is unchanged for 6.999585054s, waiting for 1m20s
Oct 30 04:07:51.932: INFO: node status heartbeat is unchanged for 7.00244463s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:07:51.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-3028" for this suite.

• [SLOW TEST:300.053 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":6,"skipped":804,"failed":0}
Oct 30 04:07:51.950: INFO: Running AfterSuite actions on all nodes
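The cadence bookkeeping above ("heartbeat is unchanged for N", then "heartbeat changed" when the condition timestamps move) can be reproduced with a plain client-go poll. A minimal sketch, not the e2e framework's implementation; the node name node1 and the kubeconfig path come from this log, while the one-second tick and the choice of the MemoryPressure condition are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// heartbeat returns the LastHeartbeatTime of the named condition type.
func heartbeat(conds []v1.NodeCondition, t v1.NodeConditionType) metav1.Time {
	for _, c := range conds {
		if c.Type == t {
			return c.LastHeartbeatTime
		}
	}
	return metav1.Time{}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	var last metav1.Time
	lastChange := time.Now()
	// time.Tick is fine for a one-shot sketch; long-lived code should use
	// time.NewTicker and stop it.
	for range time.Tick(time.Second) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "node1", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		hb := heartbeat(node.Status.Conditions, v1.NodeMemoryPressure)
		if hb.Equal(&last) {
			fmt.Printf("node status heartbeat is unchanged for %s\n", time.Since(lastChange))
		} else {
			fmt.Printf("node status heartbeat changed to %s\n", hb)
			last, lastChange = hb, time.Now()
		}
	}
}
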
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:03:53.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
Oct 30 04:03:53.293: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:03:55.298: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:03:57.299: INFO: The status of Pod pod-back-off-image is Running (Ready = true)
STEP: getting restart delay-0
Oct 30 04:05:41.467: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-10-30 04:04:58 +0000 UTC restartedAt=2021-10-30 04:05:40 +0000 UTC (42s)
STEP: getting restart delay-1
Oct 30 04:07:14.850: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-10-30 04:05:45 +0000 UTC restartedAt=2021-10-30 04:07:13 +0000 UTC (1m28s)
STEP: getting restart delay-2
Oct 30 04:10:01.553: INFO: getRestartDelay: restartCount = 6, finishedAt=2021-10-30 04:07:18 +0000 UTC restartedAt=2021-10-30 04:10:01 +0000 UTC (2m43s)
STEP: updating the image
Oct 30 04:10:02.061: INFO: Successfully updated pod "pod-back-off-image"
STEP: get restart delay after image update
Oct 30 04:10:29.156: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-10-30 04:10:12 +0000 UTC restartedAt=2021-10-30 04:10:28 +0000 UTC (16s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:10:29.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8334" for this suite.

• [SLOW TEST:395.910 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
------------------------------
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":7,"skipped":338,"failed":0}
Oct 30 04:10:29.169: INFO: Running AfterSuite actions on all nodes
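The delay-0/1/2 readings above (42s, 1m28s, 2m43s) show the kubelet's crash-loop back-off roughly doubling per restart, and the 16s reading after the image update shows the timer resetting when the container spec changes, which is exactly what this spec asserts. Each reading is derived from container status. A minimal sketch of that derivation, assuming (this is not the framework's exact getRestartDelay helper) that the delay is the gap between the previous termination and the subsequent start:

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restartDelay returns the gap between the previous termination and the
// subsequent start of the named container, i.e. the observed back-off.
func restartDelay(pod *v1.Pod, container string) (time.Duration, error) {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name != container {
			continue
		}
		if cs.State.Running == nil || cs.LastTerminationState.Terminated == nil {
			return 0, fmt.Errorf("container %q has not restarted yet", container)
		}
		finished := cs.LastTerminationState.Terminated.FinishedAt.Time
		started := cs.State.Running.StartedAt.Time
		return started.Sub(finished), nil
	}
	return 0, fmt.Errorf("container %q not found", container)
}

func main() {
	// Synthetic status mirroring the delay-2 reading above:
	// finishedAt=04:07:18, restartedAt=04:10:01, so the delay is 2m43s.
	fin := time.Date(2021, 10, 30, 4, 7, 18, 0, time.UTC)
	start := time.Date(2021, 10, 30, 4, 10, 1, 0, time.UTC)
	pod := &v1.Pod{Status: v1.PodStatus{ContainerStatuses: []v1.ContainerStatus{{
		Name: "back-off",
		State: v1.ContainerState{Running: &v1.ContainerStateRunning{
			StartedAt: metav1.Time{Time: start},
		}},
		LastTerminationState: v1.ContainerState{Terminated: &v1.ContainerStateTerminated{
			FinishedAt: metav1.Time{Time: fin},
		}},
	}}}}
	d, err := restartDelay(pod, "back-off")
	fmt.Println(d, err) // 2m43s <nil>
}
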
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:01:29.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W1030 04:01:29.954110 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:01:29.954: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:01:29.956: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Oct 30 04:01:29.974: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:01:31.978: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:01:33.979: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:01:35.978: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:01:37.979: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:01:39.981: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:01:41.981: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:01:43.979: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Oct 30 04:13:01.320: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-10-30 04:07:54 +0000 UTC restartedAt=2021-10-30 04:13:00 +0000 UTC (5m6s)
Oct 30 04:18:18.693: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-10-30 04:13:05 +0000 UTC restartedAt=2021-10-30 04:18:17 +0000 UTC (5m12s)
Oct 30 04:23:28.039: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-10-30 04:18:22 +0000 UTC restartedAt=2021-10-30 04:23:26 +0000 UTC (5m4s)
STEP: getting restart delay after a capped delay
Oct 30 04:28:44.384: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-10-30 04:23:31 +0000 UTC restartedAt=2021-10-30 04:28:43 +0000 UTC (5m12s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:28:44.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9299" for this suite.

• [SLOW TEST:1634.468 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":1,"skipped":44,"failed":0}
Oct 30 04:28:44.395: INFO: Running AfterSuite actions on all nodes
Oct 30 04:04:05.508: INFO: Running AfterSuite actions on all nodes
Oct 30 04:28:44.441: INFO: Running AfterSuite actions on node 1
Oct 30 04:28:44.441: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5770 Specs in 1634.819 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5717 Skipped

Ginkgo ran 1 suite in 27m16.337604542s
Test Suite Failed
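For reference on the capped readings in the MaxContainerBackOff spec above (5m6s, 5m12s, 5m4s, 5m12s): the kubelet doubles the crash-loop back-off per restart and caps it at MaxContainerBackOff, 300 seconds in the kubelet defaults, so the observed gaps are that cap plus image-pull and container-start overhead. A minimal sketch of the expected progression under those assumed defaults (a 10s initial delay, doubled per crash, capped at 5m):

package main

import (
	"fmt"
	"time"
)

// backoffSequence returns the first n expected back-off delays: the delay
// starts at initial, doubles per crash, and never exceeds max.
func backoffSequence(initial, max time.Duration, n int) []time.Duration {
	out := make([]time.Duration, 0, n)
	for d := initial; len(out) < n; d *= 2 {
		if d > max {
			d = max
		}
		out = append(out, d)
	}
	return out
}

func main() {
	fmt.Println(backoffSequence(10*time.Second, 5*time.Minute, 8))
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
}
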