Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654298345 - Will randomize all specs
Will run 5773 specs
Running in parallel across 10 nodes
Jun 3 23:19:07.312: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:19:07.315: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 3 23:19:07.343: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 3 23:19:07.395: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun 3 23:19:07.395: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun 3 23:19:07.395: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 3 23:19:07.395: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 3 23:19:07.395: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 3 23:19:07.413: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 3 23:19:07.413: INFO: e2e test version: v1.21.9
Jun 3 23:19:07.414: INFO: kube-apiserver version: v1.21.1
Jun 3 23:19:07.414: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:19:07.421: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
Jun 3 23:19:07.430: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:19:07.450: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Jun 3 23:19:07.438: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:19:07.457: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Jun 3 23:19:07.444: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:19:07.465: INFO: Cluster IP family: ipv4
Jun 3 23:19:07.443: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:19:07.465: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun 3 23:19:07.448: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:19:07.468: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Jun 3 23:19:07.450: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:19:07.470: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Jun 3 23:19:07.454: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:19:07.475: INFO: Cluster IP family: ipv4
Jun 3 23:19:07.456: INFO: >>> kubeConfig: /root/.kube/config
Jun 3
23:19:07.476: INFO: Cluster IP family: ipv4 SSSSSSSS ------------------------------ Jun 3 23:19:07.458: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:19:07.479: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:07.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector W0603 23:19:07.991307 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:07.991: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:07.993: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 Jun 3 23:19:07.995: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:07.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-191" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:08.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test W0603 23:19:08.310440 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:08.310: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:08.312: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:08.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-9876" for this suite. 
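Several of the specs that follow (Downward API, Security Context, Pods Extended) log lines of the form 'Waiting up to 5m0s for pod "..." in namespace "..." to be "Succeeded or Failed"' followed by periodic 'Phase="Pending" ... Elapsed: ...' updates. Those come from polling the pod's status phase on a fixed interval. Below is a small stand-alone sketch of that pattern using client-go and wait.PollImmediate; waitForPodCompleted, the package name, and the two-second interval are illustrative choices, not the framework's own helper.

    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodCompleted polls the pod every two seconds until it reaches
    // Succeeded or Failed, printing the elapsed time much like the
    // "Waiting up to 5m0s for pod ..." lines in the surrounding output.
    func waitForPodCompleted(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        start := time.Now()
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
                return false, nil
            }
        })
    }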
•SS ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:07.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api W0603 23:19:07.736433 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:07.737: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:07.740: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Jun 3 23:19:07.754: INFO: Waiting up to 5m0s for pod "downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c" in namespace "downward-api-9913" to be "Succeeded or Failed" Jun 3 23:19:07.756: INFO: Pod "downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085972ms Jun 3 23:19:09.761: INFO: Pod "downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006685379s Jun 3 23:19:11.765: INFO: Pod "downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010823686s Jun 3 23:19:13.770: INFO: Pod "downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015905419s Jun 3 23:19:15.774: INFO: Pod "downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.019196196s STEP: Saw pod success Jun 3 23:19:15.774: INFO: Pod "downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c" satisfied condition "Succeeded or Failed" Jun 3 23:19:15.776: INFO: Trying to get logs from node node2 pod downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c container dapi-container: STEP: delete the pod Jun 3 23:19:15.788: INFO: Waiting for pod downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c to disappear Jun 3 23:19:15.791: INFO: Pod downward-api-6e60cb13-15c7-4a9b-b589-60539b50a94c no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:15.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9913" for this suite. • [SLOW TEST:8.090 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":61,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:15.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:17.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9412" for this suite. 
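The 'Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled' / 'Error creating dryrun pod; assuming PodSecurityPolicy is disabled' pairs logged in the BeforeEach blocks above come from probing PSP enforcement with a server-side dry-run pod create, which the cmk.intel.com admission webhook rejects because it does not support dry run. A minimal client-go sketch of that probe pattern follows, assuming a reachable kubeconfig at /root/.kube/config; probePSP and the pause image are illustrative, not the framework's own code.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // probePSP attempts a server-side dry-run pod create. If any admission
    // plugin or webhook rejects the request (as "cmk.intel.com" does above),
    // the caller falls back to assuming PodSecurityPolicy is not enforced.
    func probePSP(ctx context.Context, c kubernetes.Interface, ns string) bool {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "psp-probe-"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "probe", Image: "k8s.gcr.io/pause:3.4.1"}},
            },
        }
        _, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{
            DryRun: []string{metav1.DryRunAll}, // validate and admit, but do not persist
        })
        if err != nil {
            fmt.Printf("Error creating dryrun pod; assuming PodSecurityPolicy is disabled: %v\n", err)
            return false
        }
        return true
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        probePSP(context.Background(), kubernetes.NewForConfigOrDie(cfg), "default")
    }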
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":2,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:17.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Jun 3 23:19:18.017: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:18.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-7790" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:07.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test W0603 23:19:07.637652 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:07.637: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:07.640: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Jun 3 23:19:07.653: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c" in namespace "security-context-test-4985" to be "Succeeded or Failed" Jun 3 23:19:07.656: INFO: Pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212632ms Jun 3 23:19:09.658: INFO: Pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.004910539s Jun 3 23:19:11.664: INFO: Pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010314675s Jun 3 23:19:13.672: INFO: Pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018343553s Jun 3 23:19:15.674: INFO: Pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020974814s Jun 3 23:19:17.678: INFO: Pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024371939s Jun 3 23:19:19.685: INFO: Pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.031213431s Jun 3 23:19:19.685: INFO: Pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c" satisfied condition "Succeeded or Failed" Jun 3 23:19:19.691: INFO: Got logs for pod "busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:19.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4985" for this suite. • [SLOW TEST:12.084 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":42,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:07.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context W0603 23:19:07.627027 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:07.627: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:07.629: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 3 23:19:07.644: INFO: Waiting up to 5m0s for pod "security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84" in namespace "security-context-1619" to be "Succeeded or Failed" Jun 3 23:19:07.654: INFO: Pod "security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.806336ms Jun 3 23:19:09.658: INFO: Pod "security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01393713s Jun 3 23:19:11.661: INFO: Pod "security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017481503s Jun 3 23:19:13.667: INFO: Pod "security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022855442s Jun 3 23:19:15.670: INFO: Pod "security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026615691s Jun 3 23:19:17.674: INFO: Pod "security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84": Phase="Pending", Reason="", readiness=false. Elapsed: 10.030300202s Jun 3 23:19:19.679: INFO: Pod "security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.035020981s STEP: Saw pod success Jun 3 23:19:19.679: INFO: Pod "security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84" satisfied condition "Succeeded or Failed" Jun 3 23:19:19.681: INFO: Trying to get logs from node node2 pod security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84 container test-container: STEP: delete the pod Jun 3 23:19:19.693: INFO: Waiting for pod security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84 to disappear Jun 3 23:19:19.695: INFO: Pod security-context-a78dc86e-377f-4df2-a9f6-fbb8ff225a84 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:19.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1619" for this suite. • [SLOW TEST:12.101 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":1,"skipped":35,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:07.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples W0603 23:19:07.515326 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:07.515: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:07.518: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Jun 3 23:19:07.527: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Jun 3 23:19:07.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7959 create -f -' Jun 3 23:19:08.111: INFO: stderr: "" Jun 3 23:19:08.111: INFO: stdout: "secret/test-secret created\n" Jun 3 23:19:08.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7959 create -f -' Jun 3 23:19:08.459: INFO: stderr: "" Jun 3 23:19:08.459: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Jun 3 23:19:22.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7959 logs secret-test-pod test-container' Jun 3 23:19:22.748: INFO: stderr: "" Jun 3 23:19:22.748: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:22.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-7959" for this suite. • [SLOW TEST:15.266 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":1,"skipped":5,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:07.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods W0603 23:19:07.670838 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:07.671: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:07.672: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Jun 3 23:19:07.687: INFO: Waiting up to 5m0s for pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b" in namespace "pods-5537" to be "Succeeded or Failed" Jun 3 23:19:07.690: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.632682ms Jun 3 23:19:09.694: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006702771s Jun 3 23:19:11.698: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01052063s Jun 3 23:19:13.702: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014176582s Jun 3 23:19:15.706: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018420235s Jun 3 23:19:17.711: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023042196s Jun 3 23:19:19.714: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026197628s Jun 3 23:19:21.719: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031281186s Jun 3 23:19:23.723: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.035786422s STEP: Saw pod success Jun 3 23:19:23.723: INFO: Pod "pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:25.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5537" for this suite. • [SLOW TEST:18.094 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":1,"skipped":37,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:08.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 E0603 23:19:20.593977 34 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 210 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x654af00, 0x9c066c0) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x654af00, 0x9c066c0)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc001332f0c, 0x2, 0x2)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003ddc6c0, 0xc001332f00, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc003e04090, 0xc003ddc6c0, 0xc003b7ea20, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc003e04090, 0xc003ddc6c0, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003e04090, 0xc003ddc6c0, 0xc003e04090, 0xc003ddc6c0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003ddc6c0, 0x14, 0xc003df83f0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc003d25e40, 0xc003389ba8, 0x14, 0xc003df83f0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0008a62a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0008a62a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc0001e2500, 0x76a2fe0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0045484b0, 0x0, 0x76a2fe0, 0xc000190840)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0045484b0, 0x76a2fe0, 0xc000190840)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001222000, 0xc0045484b0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001222000, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001222000, 0xc004bb4050)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7fbacf37f998, 0xc0004fe780, 0x6f170c8, 0x14, 0xc00408a120, 0x3, 0x3, 0x7759478, 0xc000190840, ...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x76a80c0, 0xc0004fe780, 0x6f170c8, 0x14, 0xc003cb1340, 0x3, 0x4, 0x4)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x76a80c0, 0xc0004fe780, 0x6f170c8, 0x14, 0xc002653160, 0x2, 0x2, 0x25)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0004fe780)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0004fe780)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0004fe780, 0x70f99e8)
/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-4501".
STEP: Found 2 events.
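The goroutine trace above enters pod.WaitForPodContainerStarted and panics inside the polled condition at test/e2e/framework/pod/resource.go:334. A plausible source for this kind of nil-pointer dereference, offered here as an assumption rather than a confirmed diagnosis of this run, is reading container status data the kubelet has not populated yet: ContainerStatus.Started is a *bool, and Status.ContainerStatuses can still be empty while the pod is Pending, as it is in the listing below. A defensive version of the started-check would guard both cases before dereferencing; the sketch is an illustration of the failure mode, not the upstream code or its fix.

    package podstatus

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // containerStarted reports whether the container at index idx has started.
    // Writing *pod.Status.ContainerStatuses[idx].Started directly panics with
    // "invalid memory address or nil pointer dereference" when the slice is
    // still empty or Started has not been set; this version guards both cases.
    func containerStarted(pod *corev1.Pod, idx int) bool {
        statuses := pod.Status.ContainerStatuses
        if idx >= len(statuses) || statuses[idx].Started == nil {
            return false
        }
        return *statuses[idx].Started
    }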
Jun 3 23:19:20.597: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for startup-73761899-72a1-4d1f-8e36-e85677500671: { } Scheduled: Successfully assigned container-probe-4501/startup-73761899-72a1-4d1f-8e36-e85677500671 to node2 Jun 3 23:19:20.597: INFO: At 2022-06-03 23:19:20 +0000 UTC - event for startup-73761899-72a1-4d1f-8e36-e85677500671: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" Jun 3 23:19:20.599: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 23:19:20.599: INFO: startup-73761899-72a1-4d1f-8e36-e85677500671 node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:19:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:19:08 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:19:08 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:19:08 +0000 UTC }] Jun 3 23:19:20.599: INFO: Jun 3 23:19:20.604: INFO: Logging node info for node master1 Jun 3 23:19:20.607: INFO: Node Info: &Node{ObjectMeta:{master1 4d289319-b343-4e96-a789-1a1cbeac007b 76245 0 2022-06-03 19:57:53 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:57:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-03 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-03 20:05:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:30 +0000 UTC,LastTransitionTime:2022-06-03 20:03:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:15 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:15 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:15 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:19:15 +0000 UTC,LastTransitionTime:2022-06-03 20:00:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3d668405f73a457bb0bcb4df5f4edac8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:c08279e3-a5cb-4f4d-b9f0-f2cde655469f,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 23:19:20.607: INFO: Logging kubelet events for node master1 Jun 3 23:19:20.610: INFO: Logging pods the kubelet thinks is on node master1 Jun 3 23:19:20.637: INFO: dns-autoscaler-7df78bfcfb-vdtpl started at 2022-06-03 20:01:09 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.637: INFO: Container autoscaler ready: true, restart count 2 Jun 3 23:19:20.637: INFO: coredns-8474476ff8-rvc4v started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.637: INFO: Container coredns ready: true, restart count 1 Jun 3 23:19:20.637: INFO: container-registry-65d7c44b96-2nzvn started at 2022-06-03 20:05:02 +0000 UTC (0+2 container statuses recorded) Jun 3 23:19:20.637: INFO: Container docker-registry ready: true, restart count 0 Jun 3 23:19:20.637: INFO: Container nginx ready: true, restart count 0 Jun 3 23:19:20.637: INFO: kube-scheduler-master1 started at 2022-06-03 20:06:52 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.637: INFO: Container kube-scheduler ready: true, restart count 0 Jun 3 23:19:20.637: INFO: kube-proxy-zgchh started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.637: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 23:19:20.637: INFO: kube-controller-manager-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.637: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 3 23:19:20.637: INFO: kube-flannel-m8sj7 started at 2022-06-03 20:00:31 +0000 UTC (1+1 container statuses recorded) Jun 3 23:19:20.637: INFO: Init container install-cni ready: true, restart count 0 Jun 3 23:19:20.637: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 23:19:20.637: INFO: kube-multus-ds-amd64-n58qk started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.637: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:19:20.637: INFO: node-exporter-45rhg started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 23:19:20.637: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:19:20.637: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:19:20.637: INFO: kube-apiserver-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.637: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 23:19:20.722: INFO: Latency metrics for node master1 Jun 3 23:19:20.722: INFO: Logging node info for node master2 Jun 3 23:19:20.725: INFO: Node Info: &Node{ObjectMeta:{master2 a6ae2f0e-af0f-4dbb-a8e5-6d3a309310bc 76214 0 2022-06-03 19:58:21 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-03 20:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:28 +0000 UTC,LastTransitionTime:2022-06-03 20:03:28 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:14 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:14 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:14 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:19:14 +0000 UTC,LastTransitionTime:2022-06-03 20:00:45 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:21e5c20b6e4a4d3fb07443d5575db572,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:52401484-5222-49a3-a465-e7215ade9b1e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 23:19:20.726: INFO: Logging kubelet events for node master2 Jun 3 23:19:20.728: INFO: Logging pods the kubelet thinks is on node master2 Jun 3 23:19:20.737: INFO: kube-scheduler-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.737: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 23:19:20.737: INFO: kube-flannel-sbdcv started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 23:19:20.737: INFO: Init container install-cni ready: true, restart count 2 Jun 3 23:19:20.737: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 23:19:20.737: INFO: kube-multus-ds-amd64-ccvdq started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.737: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:19:20.737: INFO: kube-apiserver-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.737: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 23:19:20.737: INFO: kube-controller-manager-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.737: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 23:19:20.737: INFO: kube-proxy-nlc58 started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.737: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 23:19:20.737: INFO: prometheus-operator-585ccfb458-xp2lz started at 2022-06-03 20:13:21 +0000 UTC (0+2 container statuses recorded) Jun 3 23:19:20.737: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:19:20.737: INFO: Container prometheus-operator ready: true, restart count 0 Jun 3 23:19:20.737: INFO: node-exporter-2h6sb started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 23:19:20.737: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:19:20.737: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:19:20.819: INFO: Latency metrics for node master2 Jun 3 23:19:20.819: INFO: Logging node info for node master3 Jun 3 23:19:20.821: INFO: Node Info: &Node{ObjectMeta:{master3 559b19e7-45b0-4589-9993-9bba259aae96 76231 0 2022-06-03 19:58:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true 
flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-03 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-03 20:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:22 +0000 UTC,LastTransitionTime:2022-06-03 20:03:22 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:15 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:15 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:15 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:19:15 +0000 UTC,LastTransitionTime:2022-06-03 20:03:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b399eed918a40dd8324debc1c0777a3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fde35f0-2dc9-4531-9d2b-0bd4a6516b3a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 23:19:20.822: INFO: Logging kubelet events for node master3 Jun 3 23:19:20.824: INFO: Logging pods the kubelet thinks is on node master3 Jun 3 23:19:20.838: INFO: kube-apiserver-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.838: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 23:19:20.838: INFO: kube-flannel-nx64t started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 23:19:20.838: INFO: Init container install-cni ready: true, restart count 2 Jun 3 23:19:20.838: INFO: Container kube-flannel ready: true, restart count 2 Jun 3 23:19:20.838: INFO: kube-multus-ds-amd64-gjv49 started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.838: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:19:20.838: INFO: node-feature-discovery-controller-cff799f9f-8fbbp started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.838: INFO: Container nfd-controller ready: true, restart count 0 Jun 3 23:19:20.838: INFO: kube-controller-manager-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.838: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 23:19:20.838: INFO: kube-scheduler-master3 started at 2022-06-03 19:58:27 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.838: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 23:19:20.838: INFO: kube-proxy-m8r9n started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.838: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 23:19:20.838: INFO: coredns-8474476ff8-dvwn7 started at 2022-06-03 20:01:07 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.838: INFO: Container coredns ready: true, restart count 1 Jun 3 23:19:20.838: INFO: node-exporter-jn8vv started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 23:19:20.838: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:19:20.838: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:19:20.939: INFO: Latency metrics for node master3 Jun 3 23:19:20.939: INFO: Logging node info for node node1 Jun 3 23:19:20.941: INFO: Node Info: &Node{ObjectMeta:{node1 
482ecf0f-7f88-436d-a313-227096fe8b8d 76305 0 2022-06-03 19:59:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:11:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 22:19:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-03 23:19:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:39 +0000 UTC,LastTransitionTime:2022-06-03 20:03:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:18 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:18 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:18 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:19:18 +0000 UTC,LastTransitionTime:2022-06-03 20:00:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7b1fa7572024d5cac9eec5f4f2a75d3,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:a1aa46cd-ec2c-417b-ae44-b808bdc04113,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 
3 23:19:20.942: INFO: Logging kubelet events for node node1 Jun 3 23:19:20.945: INFO: Logging pods the kubelet thinks is on node node1 Jun 3 23:19:20.962: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-2p524 started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:20.962: INFO: kube-flannel-hm6bh started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Init container install-cni ready: true, restart count 2 Jun 3 23:19:20.962: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 23:19:20.962: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-f9gmh started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: false, restart count 0 Jun 3 23:19:20.962: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-z28pn started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:20.962: INFO: nginx-proxy-node1 started at 2022-06-03 19:59:31 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:19:20.962: INFO: cmk-init-discover-node1-n75dv started at 2022-06-03 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 3 23:19:20.962: INFO: Container discover ready: false, restart count 0 Jun 3 23:19:20.962: INFO: Container init ready: false, restart count 0 Jun 3 23:19:20.962: INFO: Container install ready: false, restart count 0 Jun 3 23:19:20.962: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-fblwd started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:20.962: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-4jh46 started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:20.962: INFO: secret-test-pod started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container test-container ready: false, restart count 0 Jun 3 23:19:20.962: INFO: node-feature-discovery-worker-rg6tx started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:19:20.962: INFO: node-exporter-f5xkq started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 23:19:20.962: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:19:20.962: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:19:20.962: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-llh2d started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:20.962: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-v4s5p started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: false, restart count 0 Jun 3 23:19:20.962: INFO: 
pod-always-succeed90c4b8fc-d649-4661-af9c-cb239ff8789b started at 2022-06-03 23:19:07 +0000 UTC (1+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Init container foo ready: true, restart count 0 Jun 3 23:19:20.962: INFO: Container bar ready: false, restart count 0 Jun 3 23:19:20.962: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-72cw4 started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.962: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:20.963: INFO: kube-multus-ds-amd64-p7r6j started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.963: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:19:20.963: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.963: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:19:20.963: INFO: cmk-webhook-6c9d5f8578-c927x started at 2022-06-03 20:12:25 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.963: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 23:19:20.963: INFO: collectd-nbx5z started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 23:19:20.963: INFO: Container collectd ready: true, restart count 0 Jun 3 23:19:20.963: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:19:20.963: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:19:20.963: INFO: kube-proxy-b6zlv started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.963: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 23:19:20.963: INFO: prometheus-k8s-0 started at 2022-06-03 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 3 23:19:20.963: INFO: Container config-reloader ready: true, restart count 0 Jun 3 23:19:20.963: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 23:19:20.963: INFO: Container grafana ready: true, restart count 0 Jun 3 23:19:20.963: INFO: Container prometheus ready: true, restart count 1 Jun 3 23:19:20.963: INFO: busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c started at 2022-06-03 23:19:07 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.963: INFO: Container busybox-privileged-true-3625411c-5dae-42a9-b574-da0e3dc1ce0c ready: false, restart count 0 Jun 3 23:19:20.963: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-wswth started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.963: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:20.963: INFO: image-pull-testd20b0629-b3e8-4f16-ba77-f804deadfd01 started at 2022-06-03 23:19:19 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.963: INFO: Container image-pull-test ready: false, restart count 0 Jun 3 23:19:20.963: INFO: cmk-84nbw started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded) Jun 3 23:19:20.963: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:19:20.963: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:19:20.963: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-psnnt started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:20.963: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: false, restart count 0 Jun 3 23:19:27.800: INFO: Latency metrics for node node1 Jun 3 
23:19:27.800: INFO: Logging node info for node node2 Jun 3 23:19:27.803: INFO: Node Info: &Node{ObjectMeta:{node2 bb95e261-57f4-4e78-b1f6-cbf8d9287d74 76533 0 2022-06-03 19:59:32 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 22:19:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-03 23:19:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}},"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:25 +0000 UTC,LastTransitionTime:2022-06-03 20:03:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:26 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:26 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:19:26 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:19:26 +0000 UTC,LastTransitionTime:2022-06-03 20:03:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:73f6f7c4482d4ddfadf38b35a5d03575,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:14b04379-324d-413e-8b7f-b1dff077c955,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 23:19:27.804: INFO: Logging kubelet events for node node2 Jun 3 23:19:27.806: INFO: Logging pods the kubelet thinks is on node node2 Jun 3 23:19:27.834: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-z87rb started at 2022-06-03 23:19:07 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-59rsx started at 2022-06-03 23:19:07 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt started at 2022-06-03 
20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-2pmj8 started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-84r8m started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-qb8fp started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: nginx-proxy-node2 started at 2022-06-03 19:59:32 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:19:27.834: INFO: cmk-init-discover-node2-xvf8p started at 2022-06-03 20:12:02 +0000 UTC (0+3 container statuses recorded) Jun 3 23:19:27.834: INFO: Container discover ready: false, restart count 0 Jun 3 23:19:27.834: INFO: Container init ready: false, restart count 0 Jun 3 23:19:27.834: INFO: Container install ready: false, restart count 0 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-hf2fq started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0 started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0 ready: false, restart count 0 Jun 3 23:19:27.834: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 started at 2022-06-03 20:16:39 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container tas-extender ready: true, restart count 0 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-bt7gc started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-zh9cm started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-j4kdf started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: kube-flannel-pc7wj started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Init container install-cni ready: true, restart count 0 Jun 3 23:19:27.834: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 23:19:27.834: INFO: kube-multus-ds-amd64-n7spl started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:19:27.834: INFO: 
startup-c9ee9039-2d02-4d7e-8931-fe0ee5df70f5 started at 2022-06-03 23:19:07 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container busybox ready: false, restart count 0 Jun 3 23:19:27.834: INFO: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7-rpm2j started at 2022-06-03 23:19:07 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 ready: true, restart count 0 Jun 3 23:19:27.834: INFO: kube-proxy-qmkcq started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 23:19:27.834: INFO: node-feature-discovery-worker-gn855 started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:19:27.834: INFO: collectd-q2l4t started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 23:19:27.834: INFO: Container collectd ready: true, restart count 0 Jun 3 23:19:27.834: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:19:27.834: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:19:27.834: INFO: security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb started at 2022-06-03 23:19:20 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container test-container ready: false, restart count 0 Jun 3 23:19:27.834: INFO: startup-73761899-72a1-4d1f-8e36-e85677500671 started at 2022-06-03 23:19:08 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container busybox ready: false, restart count 0 Jun 3 23:19:27.834: INFO: pod-ready started at 2022-06-03 23:19:09 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container pod-readiness-gate ready: false, restart count 0 Jun 3 23:19:27.834: INFO: kubernetes-dashboard-785dcbb76d-25c95 started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 23:19:27.834: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 23:19:27.834: INFO: cmk-v446x started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded) Jun 3 23:19:27.835: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:19:27.835: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:19:27.835: INFO: node-exporter-g45bm started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 23:19:27.835: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:19:27.835: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:19:28.057: INFO: Latency metrics for node node2 Jun 3 23:19:28.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4501" for this suite. •! 
Panic [19.515 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x654af00, 0x9c066c0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc001332f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003ddc6c0, 0xc001332f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc003e04090, 0xc003ddc6c0, 0xc003b7ea20, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc003e04090, 0xc003ddc6c0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003e04090, 0xc003ddc6c0, 0xc003e04090, 0xc003ddc6c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003ddc6c0, 0x14, 0xc003df83f0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc003d25e40, 0xc003389ba8, 0x14, 0xc003df83f0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0004fe780) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0004fe780) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0004fe780, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security 
Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:08.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test W0603 23:19:08.601401 41 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:08.601: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:08.603: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Jun 3 23:19:08.618: INFO: Waiting up to 5m0s for pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0" in namespace "security-context-test-2859" to be "Succeeded or Failed" Jun 3 23:19:08.620: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261464ms Jun 3 23:19:10.624: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005909581s Jun 3 23:19:12.628: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010170271s Jun 3 23:19:14.634: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016109394s Jun 3 23:19:16.638: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020526119s Jun 3 23:19:18.643: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02480124s Jun 3 23:19:20.651: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.032804606s Jun 3 23:19:22.653: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.035627431s Jun 3 23:19:24.659: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.041143373s Jun 3 23:19:26.664: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.046598162s Jun 3 23:19:28.673: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.054716788s Jun 3 23:19:28.673: INFO: Pod "busybox-user-0-17f4e3a7-5c74-4284-a75e-be4b10d514e0" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:28.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2859" for this suite. 
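For context, the "should run the container with uid 0" spec that just finished only needs a busybox pod whose container security context pins RunAsUser to 0 and then waits for it to exit "Succeeded or Failed". A minimal sketch of such a pod object, assuming the standard k8s.io/api types; the image, command, and names here are illustrative, not the exact manifest the framework generates:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(0) // run explicitly as root, mirroring the "uid 0" spec above

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-0-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-user-0-example",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "id -u"}, // should print 0
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: &uid,
				},
			}},
		},
	}
	fmt.Printf("pod %q will run as uid %d\n", pod.Name, *pod.Spec.Containers[0].SecurityContext.RunAsUser)
}
```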
• [SLOW TEST:20.104 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":462,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:19.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 3 23:19:20.022: INFO: Waiting up to 5m0s for pod "security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb" in namespace "security-context-131" to be "Succeeded or Failed" Jun 3 23:19:20.025: INFO: Pod "security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.586133ms Jun 3 23:19:22.058: INFO: Pod "security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035221405s Jun 3 23:19:24.062: INFO: Pod "security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03966209s Jun 3 23:19:26.066: INFO: Pod "security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043300625s Jun 3 23:19:28.068: INFO: Pod "security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046180529s Jun 3 23:19:30.073: INFO: Pod "security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050294222s STEP: Saw pod success Jun 3 23:19:30.073: INFO: Pod "security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb" satisfied condition "Succeeded or Failed" Jun 3 23:19:30.076: INFO: Trying to get logs from node node2 pod security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb container test-container: STEP: delete the pod Jun 3 23:19:30.088: INFO: Waiting for pod security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb to disappear Jun 3 23:19:30.090: INFO: Pod security-context-8fc4e8cb-f39e-49a0-bb0b-5e8dee8552bb no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:30.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-131" for this suite. 
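The "seccomp unconfined on the pod" spec above sets a pod-wide seccomp profile (the log still shows the older seccomp.security.alpha.kubernetes.io/pod annotation wording) and verifies the container runs without seccomp filtering. A rough sketch using the structured SeccompProfile field that is available in this API version; names and the probe command are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod-wide seccomp set to Unconfined; individual containers may still override it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "seccomp-unconfined-pod-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeUnconfined},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "grep Seccomp /proc/self/status"}, // expect "Seccomp: 0"
			}},
		},
	}
	fmt.Printf("pod %q seccomp type: %s\n", pod.Name, pod.Spec.SecurityContext.SeccompProfile.Type)
}
```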
• [SLOW TEST:10.107 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":2,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:19.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:31.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-272" for this suite. • [SLOW TEST:12.097 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":2,"skipped":73,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:22.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Jun 3 23:19:22.805: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-4697" to be "Succeeded or Failed" Jun 3 23:19:22.808: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.951734ms Jun 3 23:19:24.812: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007047656s Jun 3 23:19:26.817: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011419572s Jun 3 23:19:28.822: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016420367s Jun 3 23:19:30.824: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018847868s Jun 3 23:19:32.828: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022920698s Jun 3 23:19:32.828: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:32.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4697" for this suite. • [SLOW TEST:10.119 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":11,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:26.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:37.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2310" for this suite. 
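Several of the specs above emit the same "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" sequence followed by repeated Phase="Pending" lines; under the hood the framework polls the pod phase until a terminal state is reached. A rough, self-contained sketch of that polling pattern using the apimachinery wait helper; the namespace, pod name, and 2s interval are placeholders, not the framework's exact values:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "default", "example-pod" // placeholders

	// Poll every 2s, give up after 5m, mirroring the "Waiting up to 5m0s" log lines.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		// "Succeeded or Failed" is the terminal condition the log lines wait for.
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}
```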
• [SLOW TEST:11.139 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":2,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:32.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Jun 3 23:19:32.099: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-4608" to be "Succeeded or Failed" Jun 3 23:19:32.101: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 1.6124ms Jun 3 23:19:34.105: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006200653s Jun 3 23:19:36.109: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009426463s Jun 3 23:19:38.111: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012068888s Jun 3 23:19:40.115: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015984519s Jun 3 23:19:42.123: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023745914s Jun 3 23:19:44.127: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.028121951s Jun 3 23:19:44.127: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:44.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4608" for this suite. • [SLOW TEST:12.100 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":3,"skipped":164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:33.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Jun 3 23:19:33.284: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305" in namespace "security-context-test-1496" to be "Succeeded or Failed" Jun 3 23:19:33.286: INFO: Pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228658ms Jun 3 23:19:35.290: INFO: Pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006275431s Jun 3 23:19:37.294: INFO: Pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010230774s Jun 3 23:19:39.298: INFO: Pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014547702s Jun 3 23:19:41.302: INFO: Pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017872422s Jun 3 23:19:43.305: INFO: Pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020882534s Jun 3 23:19:45.309: INFO: Pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025339122s Jun 3 23:19:47.313: INFO: Pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.029056342s Jun 3 23:19:47.313: INFO: Pod "alpine-nnp-true-42bf91e5-ad70-4af1-94e3-ec72e1c3d305" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:47.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1496" for this suite. • [SLOW TEST:14.077 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:30.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:50.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6782" for this suite. 
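The "should not be able to pull from private registry without secret" spec above expects the kubelet to report an image pull failure when the pod carries no matching imagePullSecrets; the companion "with secret" spec later in this run attaches a dockerconfigjson secret and expects the pull to succeed. A hedged sketch of both variants; the registry host, credentials, and secret name are illustrative only:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Without a matching imagePullSecret the kubelet ends up in ErrImagePull/ImagePullBackOff.
	noSecret := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:  "image-pull-test",
			Image: "registry.example.com/private/agnhost:2.32", // illustrative private registry
		}},
	}

	// The "with secret" variant creates a kubernetes.io/dockerconfigjson secret first
	// and references it from the pod spec.
	pullSecret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-secret"},
		Type:       corev1.SecretTypeDockerConfigJson,
		Data: map[string][]byte{
			corev1.DockerConfigJsonKey: []byte(`{"auths":{"registry.example.com":{"auth":"<base64 user:pass>"}}}`),
		},
	}
	withSecret := noSecret
	withSecret.ImagePullSecrets = []corev1.LocalObjectReference{{Name: pullSecret.Name}}

	fmt.Println("pull secrets without/with:", len(noSecret.ImagePullSecrets), len(withSecret.ImagePullSecrets))
}
```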
• [SLOW TEST:20.126 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:38.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Jun 3 23:19:38.420: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Jun 3 23:19:38.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7219 create -f -' Jun 3 23:19:38.857: INFO: stderr: "" Jun 3 23:19:38.857: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Jun 3 23:19:50.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7219 logs dapi-test-pod test-container' Jun 3 23:19:51.034: INFO: stderr: "" Jun 3 23:19:51.034: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-7219\nMY_POD_IP=10.244.3.218\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Jun 3 23:19:51.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7219 logs dapi-test-pod test-container' Jun 3 23:19:51.221: INFO: stderr: "" Jun 3 23:19:51.221: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-7219\nMY_POD_IP=10.244.3.218\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:51.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-7219" for this suite. 
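The dapi-test-pod output captured above (MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, MY_HOST_IP) is produced by Downward API field references injected as environment variables. A minimal sketch of that wiring, assuming the standard k8s.io/api types; the image and command are illustrative rather than the exact example manifest the test applies:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIEnv wires a pod field into an environment variable via the Downward API.
func downwardAPIEnv(name, fieldPath string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
		},
	}
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "env"}, // the logs above are simply this env dump
				Env: []corev1.EnvVar{
					downwardAPIEnv("MY_POD_NAME", "metadata.name"),
					downwardAPIEnv("MY_POD_NAMESPACE", "metadata.namespace"),
					downwardAPIEnv("MY_POD_IP", "status.podIP"),
					downwardAPIEnv("MY_HOST_IP", "status.hostIP"),
				},
			}},
		},
	}
	fmt.Printf("%q exposes %d downward API env vars\n", pod.Name, len(pod.Spec.Containers[0].Env))
}
```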
• [SLOW TEST:12.840 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133 should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":3,"skipped":797,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:51.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Jun 3 23:19:51.290: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:51.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-8912" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:07.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet W0603 23:19:07.934962 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:07.935: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:07.936: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 in namespace kubelet-7783 I0603 23:19:07.971455 37 runners.go:190] Created replication controller with name: cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7, namespace: kubelet-7783, replica count: 20 I0603 23:19:18.022829 37 runners.go:190] cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 Pods: 20 out of 20 created, 1 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 23:19:28.023037 37 runners.go:190] cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 23:19:29.024: INFO: Checking pods on node node1 via /runningpods endpoint Jun 3 23:19:29.024: INFO: Checking pods on node node2 via /runningpods endpoint Jun 3 23:19:30.796: INFO: Resource usage on node "node2": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 1.645 4251.09 1356.89 "runtime" 0.888 1616.34 606.06 "kubelet" 0.888 1616.34 606.06 Resource usage on node "master1": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 0.400 4832.25 1662.51 "runtime" 0.140 697.46 319.01 "kubelet" 0.140 697.46 319.01 Resource usage on node "master2": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 0.479 3584.07 1551.70 "runtime" 0.108 592.61 253.67 "kubelet" 0.108 592.61 253.67 Resource usage on node "master3": container cpu(cores) memory_working_set(MB) memory_rss(MB) "runtime" 0.088 580.64 270.96 "kubelet" 0.088 580.64 270.96 "/" 0.562 3865.05 1749.81 Resource usage on node "node1": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 1.081 6241.57 2332.26 "runtime" 0.491 2446.87 536.97 "kubelet" 0.491 2446.87 536.97 STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 in namespace kubelet-7783, will wait for the garbage collector to delete the pods Jun 3 23:19:30.857: INFO: Deleting ReplicationController cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 took: 7.076249ms Jun 3 23:19:31.458: INFO: Terminating ReplicationController cleanup20-cde3536c-32a2-426b-a74a-6fde2ffad4a7 pods took: 601.173689ms Jun 3 23:19:51.760: INFO: Checking pods on node node2 via /runningpods endpoint Jun 3 23:19:51.760: INFO: Checking pods on node node1 via /runningpods endpoint Jun 3 23:19:51.778: INFO: Deleting 20 pods on 2 nodes completed in 1.018703581s after the RC was deleted Jun 3 23:19:51.778: INFO: CPU usage of containers on node "node1" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 1.081 2.104 2.104 2.104 2.104 "runtime" 0.000 0.000 0.491 0.813 0.813 0.813 0.813 "kubelet" 0.000 0.000 0.491 0.813 0.813 0.813 0.813 CPU usage of containers on node "node2" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 1.645 1.736 1.736 1.736 1.736 "runtime" 0.000 0.000 0.879 0.888 0.888 0.888 0.888 "kubelet" 0.000 0.000 0.879 0.888 0.888 0.888 0.888 CPU usage of containers on node "master1" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 0.400 0.400 0.417 0.417 0.417 "runtime" 0.000 0.000 0.123 0.123 0.123 0.123 0.123 "kubelet" 0.000 0.000 0.123 0.123 0.123 0.123 0.123 CPU usage of containers on node "master2" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 0.430 0.430 0.466 
0.466 0.466 "runtime" 0.000 0.000 0.098 0.098 0.098 0.098 0.098 "kubelet" 0.000 0.000 0.098 0.098 0.098 0.098 0.098 CPU usage of containers on node "master3" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 0.480 0.480 0.562 0.562 0.562 "runtime" 0.000 0.000 0.088 0.097 0.097 0.097 0.097 "kubelet" 0.000 0.000 0.088 0.097 0.097 0.097 0.097 [AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node node1 STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node node2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:51.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-7783" for this suite. • [SLOW TEST:43.898 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":136,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:09.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true Jun 3 23:19:42.384: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false Jun 3 23:19:43.384: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false Jun 3 23:19:44.384: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false Jun 3 23:19:45.384: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false Jun 3 23:19:46.384: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false STEP: patching pod status with condition "k8s.io/test-condition1" to false Jun 3 23:19:48.392: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true Jun 3 23:19:49.394: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true Jun 3 23:19:50.393: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true Jun 3 23:19:51.393: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true [AfterEach] 
[sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:52.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-164" for this suite. • [SLOW TEST:43.304 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":1,"skipped":759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:47.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:53.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-791" for this suite. 
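The readiness-gate spec above ("pod-ready") declares two custom conditions in spec.readinessGates; the pod only reports Ready once both are patched to True on its status, and flipping one back to False drops Ready again, which is why the log shows the repeated "Expect the Ready condition..." lines while the change propagates. A sketch of the spec and the status patch using client-go; the namespace and image are placeholders:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // placeholder namespace

	// The pod is Ready only when the kubelet's checks pass AND every readinessGate condition is True.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-ready"},
		Spec: corev1.PodSpec{
			ReadinessGates: []corev1.PodReadinessGate{
				{ConditionType: "k8s.io/test-condition1"},
				{ConditionType: "k8s.io/test-condition2"},
			},
			Containers: []corev1.Container{{
				Name:  "pod-readiness-gate",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Flip one custom condition to True by patching the pod *status* subresource.
	patch := []byte(`{"status":{"conditions":[{"type":"k8s.io/test-condition1","status":"True"}]}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), pod.Name, types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
	fmt.Println("patched readiness gate condition k8s.io/test-condition1 to True")
}
```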
• [SLOW TEST:6.048 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":4,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:44.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 3 23:19:44.250: INFO: Waiting up to 5m0s for pod "security-context-59d2c23d-c829-4ca3-8182-83dc85773d62" in namespace "security-context-3218" to be "Succeeded or Failed" Jun 3 23:19:44.254: INFO: Pod "security-context-59d2c23d-c829-4ca3-8182-83dc85773d62": Phase="Pending", Reason="", readiness=false. Elapsed: 3.779198ms Jun 3 23:19:46.258: INFO: Pod "security-context-59d2c23d-c829-4ca3-8182-83dc85773d62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007610622s Jun 3 23:19:48.263: INFO: Pod "security-context-59d2c23d-c829-4ca3-8182-83dc85773d62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012706357s Jun 3 23:19:50.266: INFO: Pod "security-context-59d2c23d-c829-4ca3-8182-83dc85773d62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016014557s Jun 3 23:19:52.271: INFO: Pod "security-context-59d2c23d-c829-4ca3-8182-83dc85773d62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020560844s Jun 3 23:19:54.276: INFO: Pod "security-context-59d2c23d-c829-4ca3-8182-83dc85773d62": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025719639s Jun 3 23:19:56.280: INFO: Pod "security-context-59d2c23d-c829-4ca3-8182-83dc85773d62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.029619952s STEP: Saw pod success Jun 3 23:19:56.280: INFO: Pod "security-context-59d2c23d-c829-4ca3-8182-83dc85773d62" satisfied condition "Succeeded or Failed" Jun 3 23:19:56.282: INFO: Trying to get logs from node node1 pod security-context-59d2c23d-c829-4ca3-8182-83dc85773d62 container test-container: STEP: delete the pod Jun 3 23:19:56.510: INFO: Waiting for pod security-context-59d2c23d-c829-4ca3-8182-83dc85773d62 to disappear Jun 3 23:19:56.512: INFO: Pod security-context-59d2c23d-c829-4ca3-8182-83dc85773d62 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:56.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3218" for this suite. 
• [SLOW TEST:12.302 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":4,"skipped":198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:53.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Jun 3 23:19:53.756: INFO: Waiting up to 5m0s for pod "security-context-e1bae1e2-5527-4df3-a2c8-2b3a282028f9" in namespace "security-context-9839" to be "Succeeded or Failed" Jun 3 23:19:53.759: INFO: Pod "security-context-e1bae1e2-5527-4df3-a2c8-2b3a282028f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.780452ms Jun 3 23:19:55.763: INFO: Pod "security-context-e1bae1e2-5527-4df3-a2c8-2b3a282028f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007634486s Jun 3 23:19:57.766: INFO: Pod "security-context-e1bae1e2-5527-4df3-a2c8-2b3a282028f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010320836s STEP: Saw pod success Jun 3 23:19:57.766: INFO: Pod "security-context-e1bae1e2-5527-4df3-a2c8-2b3a282028f9" satisfied condition "Succeeded or Failed" Jun 3 23:19:57.768: INFO: Trying to get logs from node node2 pod security-context-e1bae1e2-5527-4df3-a2c8-2b3a282028f9 container test-container: STEP: delete the pod Jun 3 23:19:57.779: INFO: Waiting for pod security-context-e1bae1e2-5527-4df3-a2c8-2b3a282028f9 to disappear Jun 3 23:19:57.780: INFO: Pod security-context-e1bae1e2-5527-4df3-a2c8-2b3a282028f9 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:19:57.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9839" for this suite. 
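The pod.Spec.SecurityContext.SupplementalGroups spec that just passed adds extra GIDs to every container process in the pod and checks the group list from inside the container. A minimal sketch; the GIDs and command are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Every container process in the pod gets these GIDs added to its supplementary groups.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "supplemental-groups-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				SupplementalGroups: []int64{1234, 5678}, // illustrative GIDs
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "id -G"}, // prints the effective group list
			}},
		},
	}
	fmt.Printf("%q supplemental groups: %v\n", pod.Name, pod.Spec.SecurityContext.SupplementalGroups)
}
```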
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":5,"skipped":365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:56.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:01.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4232" for this suite. • [SLOW TEST:5.092 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":5,"skipped":243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:01.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-4663/configmap-test-0c5a8401-fee1-4568-b6a6-3536f8b83158 STEP: Updating configMap configmap-4663/configmap-test-0c5a8401-fee1-4568-b6a6-3536f8b83158 STEP: Verifying update of ConfigMap configmap-4663/configmap-test-0c5a8401-fee1-4568-b6a6-3536f8b83158 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:01.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4663" for this suite. 
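The ConfigMap spec above is a plain create/update/verify round trip against the API server, with no pod involved. A small client-go sketch of that flow; the namespace, name, and key are placeholders, not the generated ones in the log:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // placeholder namespace

	// Create, mutate, then update; finally read back to verify the change took.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-example"},
		Data:       map[string]string{"data": "value"},
	}
	created, err := cs.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	created.Data["data"] = "updated-value"
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	got, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), cm.Name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("configmap %q now has data[%q]=%q\n", got.Name, "data", got.Data["data"])
}
```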
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":6,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:51.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 3 23:19:51.852: INFO: Waiting up to 5m0s for pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9" in namespace "security-context-5217" to be "Succeeded or Failed" Jun 3 23:19:51.858: INFO: Pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099803ms Jun 3 23:19:53.861: INFO: Pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009394119s Jun 3 23:19:55.865: INFO: Pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013662993s Jun 3 23:19:57.869: INFO: Pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017573908s Jun 3 23:19:59.875: INFO: Pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022783048s Jun 3 23:20:01.879: INFO: Pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02669832s Jun 3 23:20:03.883: INFO: Pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030816038s Jun 3 23:20:05.887: INFO: Pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.035054977s STEP: Saw pod success Jun 3 23:20:05.887: INFO: Pod "security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9" satisfied condition "Succeeded or Failed" Jun 3 23:20:05.889: INFO: Trying to get logs from node node1 pod security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9 container test-container: STEP: delete the pod Jun 3 23:20:05.903: INFO: Waiting for pod security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9 to disappear Jun 3 23:20:05.905: INFO: Pod security-context-e0e29e15-a378-49e3-8bf3-deaf0f0d18b9 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:05.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5217" for this suite. 
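The spec above runs a container with seccomp unconfined; the STEP text still refers to the legacy seccomp.security.alpha.kubernetes.io annotation, but since v1.19 the same intent is expressed through securityContext.seccompProfile. A minimal sketch using the field form (name, image and command illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-unconfined-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "grep Seccomp /proc/self/status"]   # "Seccomp: 0" means no filter is applied
    securityContext:
      seccompProfile:
        type: Unconfined            # disable seccomp filtering for this container only
  restartPolicy: Never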
• [SLOW TEST:14.092 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":2,"skipped":139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:06.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:06.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-2047" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":3,"skipped":240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:06.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:08.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5907" for this suite. 
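The "should not run without a specified user ID" spec relies on runAsNonRoot: true rejecting a container whose image declares no numeric non-root user. A sketch of a manifest the kubelet would refuse to start for that reason (image and name illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: non-root-required-demo
spec:
  containers:
  - name: app
    image: busybox              # illustrative image that runs as root and sets no numeric USER
    command: ["sleep", "3600"]
    securityContext:
      runAsNonRoot: true        # kubelet must verify a non-root UID; with none specified, the container is rejected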
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":4,"skipped":321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:52.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. STEP: verifying the node has the label foo-42f71e28-e4e0-4999-bb8d-cb00db97fd36 bar STEP: verifying the node has the label fizz-22305c1f-1da3-4281-ba10-295fa59c2824 buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-22305c1f-1da3-4281-ba10-295fa59c2824 off the node node1 STEP: verifying the node doesn't have the label fizz-22305c1f-1da3-4281-ba10-295fa59c2824 STEP: removing the label foo-42f71e28-e4e0-4999-bb8d-cb00db97fd36 off the node node1 STEP: verifying the node doesn't have the label foo-42f71e28-e4e0-4999-bb8d-cb00db97fd36 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:12.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-8006" for this suite. 
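The RuntimeClass scheduling spec above labels a node, creates a RuntimeClass whose scheduling.nodeSelector points at those labels, and runs a pod that references it. The shape of the two objects is roughly as follows (handler name, labels and object names are illustrative; the test generates random label keys like the foo-.../fizz-... ones in the log):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: demo-runtime-class
handler: runc                   # must match a handler configured in the node's CRI runtime
scheduling:
  nodeSelector:
    foo: bar                    # pods using this class are only scheduled to nodes carrying these labels
    fizz: buzz
---
apiVersion: v1
kind: Pod
metadata:
  name: runtimeclass-demo
spec:
  runtimeClassName: demo-runtime-class   # merges the nodeSelector above into this pod's scheduling constraints
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]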
• [SLOW TEST:20.129 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":2,"skipped":818,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:12.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Jun 3 23:20:12.711: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-99098605-9377-48ac-a4d9-8ebd5ce30f3a" in namespace "security-context-test-397" to be "Succeeded or Failed" Jun 3 23:20:12.713: INFO: Pod "alpine-nnp-nil-99098605-9377-48ac-a4d9-8ebd5ce30f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.504505ms Jun 3 23:20:14.718: INFO: Pod "alpine-nnp-nil-99098605-9377-48ac-a4d9-8ebd5ce30f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007716905s Jun 3 23:20:16.724: INFO: Pod "alpine-nnp-nil-99098605-9377-48ac-a4d9-8ebd5ce30f3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013252895s Jun 3 23:20:16.724: INFO: Pod "alpine-nnp-nil-99098605-9377-48ac-a4d9-8ebd5ce30f3a" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:16.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-397" for this suite. 
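The AllowPrivilegeEscalation spec above leaves the field unset and runs as a non-root UID, then checks that privilege escalation is still allowed (NoNewPrivs stays unset). A sketch of that pod shape; the image, UID and command are illustrative, the suite uses its own "nonewprivs" test image:

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-nil-demo
spec:
  containers:
  - name: app
    image: alpine                     # illustrative
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      runAsUser: 1000                 # non-zero UID, as in the spec name "uid != 0"
      # allowPrivilegeEscalation deliberately omitted: the default keeps escalation allowed
  restartPolicy: Never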
• ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":820,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:16.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Jun 3 23:20:16.906: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:18.911: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:20.911: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Jun 3 23:20:20.914: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-8978 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:20.914: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:21.015: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-8978 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:21.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Jun 3 23:20:21.118: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-8978 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:21.118: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:21.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-8978" for this suite. 
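The PrivilegedPod spec runs one privileged and one unprivileged container in the same pod and checks that "ip link add dummy1 type dummy" only succeeds in the privileged one. A minimal two-container sketch using the container names from the log (image and command illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
  - name: privileged-container
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true          # full device/capability access: the "ip link add" exec succeeds here
  - name: not-privileged-container
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: false         # the same command is expected to fail in this container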
• ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":4,"skipped":882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:07.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0603 23:19:07.756020 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 23:19:07.756: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 23:19:07.758: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-c9ee9039-2d02-4d7e-8931-fe0ee5df70f5 in namespace container-probe-6002 Jun 3 23:19:23.787: INFO: Started pod startup-c9ee9039-2d02-4d7e-8931-fe0ee5df70f5 in namespace container-probe-6002 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 23:19:23.790: INFO: Initial restart count of pod startup-c9ee9039-2d02-4d7e-8931-fe0ee5df70f5 is 0 Jun 3 23:20:29.949: INFO: Restart count of pod container-probe-6002/startup-c9ee9039-2d02-4d7e-8931-fe0ee5df70f5 is now 1 (1m6.15957707s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:29.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6002" for this suite. 
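The Probing spec above ("should be restarted startup probe fails") creates a pod whose startup probe never succeeds, so the kubelet kills and restarts the container once the failure threshold is exhausted; the ~1m6s to the first restart seen in the log is the probe delay plus period times threshold plus the restart backoff. A sketch of that shape (command, thresholds and timings are illustrative, not the test's exact values):

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-fail-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "600"]
    startupProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the container never becomes "started"
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3         # after 3 failed attempts the kubelet restarts the container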
• [SLOW TEST:82.232 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":67,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:30.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:30.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-4598" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":2,"skipped":105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:08.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Jun 3 23:20:34.510: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:34.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2069" for this suite. 
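The PreStop spec deletes a pod with a grace period and verifies the container keeps running until its preStop hook has finished. The relevant manifest shape (sleep duration and grace period are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30       # upper bound for preStop plus SIGTERM handling
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]   # runs to completion before SIGTERM is sent to the container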
• [SLOW TEST:26.088 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":5,"skipped":344,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:34.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-3b330666-bdb4-4ced-a2fd-8f2d243a9390 in namespace container-probe-5503 Jun 3 23:20:38.582: INFO: Started pod liveness-override-3b330666-bdb4-4ced-a2fd-8f2d243a9390 in namespace container-probe-5503 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 23:20:38.584: INFO: Initial restart count of pod liveness-override-3b330666-bdb4-4ced-a2fd-8f2d243a9390 is 0 Jun 3 23:20:40.592: INFO: Restart count of pod container-probe-5503/liveness-override-3b330666-bdb4-4ced-a2fd-8f2d243a9390 is now 1 (2.007237121s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:40.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5503" for this suite. 
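The override spec depends on the ProbeTerminationGracePeriod feature (alpha in v1.21): a terminationGracePeriodSeconds set on the liveness probe itself caps how long the kubelet waits when that probe triggers a restart, overriding the pod-level value. A sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-override-demo
spec:
  terminationGracePeriodSeconds: 300   # pod-level default used for normal deletion
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    livenessProbe:
      exec:
        command: ["/bin/false"]        # always fails, forcing a probe-driven restart
      periodSeconds: 1
      failureThreshold: 1
      terminationGracePeriodSeconds: 5 # probe-level override: kill within 5s instead of 300s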
• [SLOW TEST:6.071 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":6,"skipped":352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:41.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 3 23:20:41.100: INFO: Waiting up to 5m0s for pod "security-context-f08b184a-5ef8-44dc-92c1-a4926c588c7c" in namespace "security-context-5816" to be "Succeeded or Failed" Jun 3 23:20:41.105: INFO: Pod "security-context-f08b184a-5ef8-44dc-92c1-a4926c588c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.827123ms Jun 3 23:20:43.110: INFO: Pod "security-context-f08b184a-5ef8-44dc-92c1-a4926c588c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009431455s Jun 3 23:20:45.114: INFO: Pod "security-context-f08b184a-5ef8-44dc-92c1-a4926c588c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013261913s Jun 3 23:20:47.118: INFO: Pod "security-context-f08b184a-5ef8-44dc-92c1-a4926c588c7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017431726s STEP: Saw pod success Jun 3 23:20:47.118: INFO: Pod "security-context-f08b184a-5ef8-44dc-92c1-a4926c588c7c" satisfied condition "Succeeded or Failed" Jun 3 23:20:47.120: INFO: Trying to get logs from node node2 pod security-context-f08b184a-5ef8-44dc-92c1-a4926c588c7c container test-container: STEP: delete the pod Jun 3 23:20:47.131: INFO: Waiting for pod security-context-f08b184a-5ef8-44dc-92c1-a4926c588c7c to disappear Jun 3 23:20:47.133: INFO: Pod security-context-f08b184a-5ef8-44dc-92c1-a4926c588c7c no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:47.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5816" for this suite. 
• [SLOW TEST:6.080 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":7,"skipped":592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:47.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 Jun 3 23:20:47.242: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:47.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-8189" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:21.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Jun 3 23:20:21.920: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:23.925: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:25.924: INFO: The status of Pod master is Running (Ready = true) Jun 3 23:20:25.941: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:27.946: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:29.948: INFO: The status of Pod slave is Running (Ready = true) Jun 3 23:20:29.965: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:31.971: INFO: The status of Pod private 
is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:33.969: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:35.968: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:37.969: INFO: The status of Pod private is Running (Ready = true) Jun 3 23:20:37.986: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:39.990: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:41.991: INFO: The status of Pod default is Running (Ready = true) Jun 3 23:20:41.996: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:41.996: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.115: INFO: Exec stderr: "" Jun 3 23:20:42.118: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.118: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.198: INFO: Exec stderr: "" Jun 3 23:20:42.201: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.201: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.285: INFO: Exec stderr: "" Jun 3 23:20:42.288: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.288: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.372: INFO: Exec stderr: "" Jun 3 23:20:42.375: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.375: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.483: INFO: Exec stderr: "" Jun 3 23:20:42.485: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.485: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.572: INFO: Exec stderr: "" Jun 3 23:20:42.575: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.575: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.672: INFO: Exec stderr: "" Jun 3 23:20:42.675: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.675: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.759: INFO: Exec stderr: "" Jun 3 23:20:42.761: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-1984 PodName:private 
ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.761: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.842: INFO: Exec stderr: "" Jun 3 23:20:42.846: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.846: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:42.924: INFO: Exec stderr: "" Jun 3 23:20:42.927: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:42.927: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.006: INFO: Exec stderr: "" Jun 3 23:20:43.009: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:43.009: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.089: INFO: Exec stderr: "" Jun 3 23:20:43.091: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:43.091: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.178: INFO: Exec stderr: "" Jun 3 23:20:43.181: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:43.181: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.272: INFO: Exec stderr: "" Jun 3 23:20:43.275: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:43.275: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.374: INFO: Exec stderr: "" Jun 3 23:20:43.377: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:43.377: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.469: INFO: Exec stderr: "" Jun 3 23:20:43.472: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:43.472: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.551: INFO: Exec stderr: "" Jun 3 23:20:43.553: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:43.554: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.631: INFO: Exec stderr: "" Jun 3 23:20:43.634: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private 
/mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:43.634: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.714: INFO: Exec stderr: "" Jun 3 23:20:43.718: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:43.718: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:43.797: INFO: Exec stderr: "" Jun 3 23:20:45.812: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-1984"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-1984"/host; echo host > "/var/lib/kubelet/mount-propagation-1984"/host/file] Namespace:mount-propagation-1984 PodName:hostexec-node1-vqtgc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 3 23:20:45.812: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:45.905: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:45.905: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.012: INFO: pod master mount master: stdout: "master", stderr: "" error: Jun 3 23:20:46.015: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.015: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.095: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:46.099: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.099: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.207: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:46.211: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.211: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.288: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:46.290: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.290: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.375: INFO: pod master mount host: stdout: "host", stderr: "" error: Jun 3 23:20:46.378: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.378: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.456: INFO: pod slave mount master: stdout: "master", stderr: "" error: Jun 3 23:20:46.458: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.458: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.566: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Jun 3 23:20:46.569: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.569: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.649: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:46.652: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.652: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.732: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:46.734: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.734: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.817: INFO: pod slave mount host: stdout: "host", stderr: "" error: Jun 3 23:20:46.820: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.820: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.899: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:46.901: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.901: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:46.987: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:46.989: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:46.989: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.081: INFO: pod private mount private: stdout: "private", stderr: "" error: Jun 3 23:20:47.084: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] 
Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:47.084: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.172: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:47.175: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:47.175: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.262: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:47.266: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:47.266: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.351: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:47.354: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:47.354: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.432: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:47.435: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:47.435: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.517: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:47.519: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:47.519: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.611: INFO: pod default mount default: stdout: "default", stderr: "" error: Jun 3 23:20:47.613: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:47.613: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.693: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Jun 3 23:20:47.693: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-1984"/master/file` = master] Namespace:mount-propagation-1984 PodName:hostexec-node1-vqtgc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} Jun 3 23:20:47.693: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.800: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-1984"/slave/file] Namespace:mount-propagation-1984 PodName:hostexec-node1-vqtgc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 3 23:20:47.800: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.883: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-1984"/host] Namespace:mount-propagation-1984 PodName:hostexec-node1-vqtgc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 3 23:20:47.883: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:47.987: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-1984 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:47.988: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:48.077: INFO: Exec stderr: "" Jun 3 23:20:48.080: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-1984 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:48.080: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:48.174: INFO: Exec stderr: "" Jun 3 23:20:48.177: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-1984 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:48.177: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:48.279: INFO: Exec stderr: "" Jun 3 23:20:48.282: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-1984 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 23:20:48.282: INFO: >>> kubeConfig: /root/.kube/config Jun 3 23:20:48.371: INFO: Exec stderr: "" Jun 3 23:20:48.371: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-1984"] Namespace:mount-propagation-1984 PodName:hostexec-node1-vqtgc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 3 23:20:48.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node1-vqtgc in namespace mount-propagation-1984 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:48.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-1984" for this suite. 
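The Mount propagation spec above runs four pods ("master", "slave", "private", "default") that mount the same host directory under /mnt/test with different volumeMounts.mountPropagation values, creates a tmpfs mount inside each plus one on the host, and then checks from every pod and from the host which mounts are visible. The distinguishing part of each pod is just the mount, sketched here for the Bidirectional ("master") case; the host path is an illustrative stand-in for the per-namespace kubelet directory the test used:

apiVersion: v1
kind: Pod
metadata:
  name: master                     # "slave" uses HostToContainer, "private" uses None, "default" leaves the field unset
spec:
  containers:
  - name: cntr
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true             # Bidirectional propagation is only permitted for privileged containers
    volumeMounts:
    - name: host-dir
      mountPath: /mnt/test
      mountPropagation: Bidirectional   # mounts created inside propagate back to the host and to "slave"
  volumes:
  - name: host-dir
    hostPath:
      path: /var/lib/kubelet/mount-propagation-demo   # illustrative host path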
• [SLOW TEST:26.591 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":5,"skipped":983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:28.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Jun 3 23:19:28.433: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Jun 3 23:19:28.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3206 create -f -' Jun 3 23:19:28.908: INFO: stderr: "" Jun 3 23:19:28.908: INFO: stdout: "pod/liveness-exec created\n" Jun 3 23:19:28.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3206 create -f -' Jun 3 23:19:29.258: INFO: stderr: "" Jun 3 23:19:29.258: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Jun 3 23:19:51.266: INFO: Pod: liveness-http, restart count:0 Jun 3 23:19:51.266: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:19:53.270: INFO: Pod: liveness-http, restart count:0 Jun 3 23:19:53.270: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:19:55.276: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:19:55.276: INFO: Pod: liveness-http, restart count:0 Jun 3 23:19:57.280: INFO: Pod: liveness-http, restart count:0 Jun 3 23:19:57.280: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:19:59.285: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:19:59.285: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:01.287: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:01.287: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:03.293: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:03.293: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:05.297: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:05.298: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:07.303: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:07.303: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:09.307: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:09.307: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:11.312: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:11.312: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:13.316: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:13.316: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:15.323: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:15.323: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:17.328: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:17.328: INFO: Pod: liveness-exec, 
restart count:0 Jun 3 23:20:19.334: INFO: Pod: liveness-http, restart count:0 Jun 3 23:20:19.334: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:21.337: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:21.337: INFO: Pod: liveness-http, restart count:1 Jun 3 23:20:21.337: INFO: Saw liveness-http restart, succeeded... Jun 3 23:20:23.340: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:25.345: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:27.349: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:29.358: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:31.361: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:33.369: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:35.373: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:37.377: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:39.382: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:41.387: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:43.392: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:45.396: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:47.400: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:49.405: INFO: Pod: liveness-exec, restart count:0 Jun 3 23:20:51.409: INFO: Pod: liveness-exec, restart count:1 Jun 3 23:20:51.409: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:51.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3206" for this suite. • [SLOW TEST:83.016 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":2,"skipped":604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:48.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-c9067792-4aac-4e98-b45b-0cf932c53000 in namespace container-probe-2744 Jun 3 23:20:52.662: INFO: Started pod startup-override-c9067792-4aac-4e98-b45b-0cf932c53000 in namespace container-probe-2744 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 23:20:52.664: INFO: Initial restart count of pod startup-override-c9067792-4aac-4e98-b45b-0cf932c53000 is 0 Jun 3 23:20:54.672: INFO: Restart 
count of pod container-probe-2744/startup-override-c9067792-4aac-4e98-b45b-0cf932c53000 is now 1 (2.007825443s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:54.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2744" for this suite. • [SLOW TEST:6.065 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":6,"skipped":1059,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:30.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-10055bc0-6b75-4093-b1e0-6e793678be2f in namespace container-probe-4065 Jun 3 23:20:36.343: INFO: Started pod liveness-10055bc0-6b75-4093-b1e0-6e793678be2f in namespace container-probe-4065 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 23:20:36.346: INFO: Initial restart count of pod liveness-10055bc0-6b75-4093-b1e0-6e793678be2f is 0 Jun 3 23:20:56.401: INFO: Restart count of pod container-probe-4065/liveness-10055bc0-6b75-4093-b1e0-6e793678be2f is now 1 (20.05564481s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:56.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4065" for this suite. 
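The "local redirect http liveness probe" spec uses an HTTP liveness probe whose target answers with a redirect to another path on the same pod; the kubelet follows the local redirect, and the container is still restarted once the probed endpoint reports failure. The generic shape of an httpGet liveness probe (path, port, image and timings are illustrative; the test uses its own agnhost-based liveness server):

apiVersion: v1
kind: Pod
metadata:
  name: http-liveness-demo
spec:
  containers:
  - name: app
    image: registry.example.com/liveness-app:1.0   # illustrative image that serves the probed endpoint
    livenessProbe:
      httpGet:
        path: /healthz             # kubelet GETs this path; same-host redirects are followed
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3          # container is restarted after 3 consecutive failures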
• [SLOW TEST:26.116 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":3,"skipped":217,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:02.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-7c9c810e-32bd-4121-bf99-e0a4dc0804ee in namespace container-probe-1326 Jun 3 23:20:08.101: INFO: Started pod busybox-7c9c810e-32bd-4121-bf99-e0a4dc0804ee in namespace container-probe-1326 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 23:20:08.104: INFO: Initial restart count of pod busybox-7c9c810e-32bd-4121-bf99-e0a4dc0804ee is 0 Jun 3 23:20:58.246: INFO: Restart count of pod container-probe-1326/busybox-7c9c810e-32bd-4121-bf99-e0a4dc0804ee is now 1 (50.141829561s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:58.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1326" for this suite. 
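[editor's note] The busybox pod above is restarted because its exec liveness probe runs longer than timeoutSeconds; on kubelet 1.20+ (the ExecProbeTimeout behavior, which the [MinimumKubeletVersion:1.20] tag refers to) an exec probe that exceeds its timeout counts as a failure instead of being silently ignored. A rough equivalent of such a spec, with commands and timings chosen for illustration rather than copied from the test fixture:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-liveness-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 600"},
				LivenessProbe: &corev1.Probe{
					// Named ProbeHandler in newer k8s.io/api releases.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{
							// Deliberately slower than TimeoutSeconds: on kubelet >= 1.20
							// the timeout is enforced and treated as a probe failure,
							// so the container gets restarted.
							Command: []string{"sh", "-c", "sleep 10"},
						},
					},
					InitialDelaySeconds: 5,
					TimeoutSeconds:      1,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```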
• [SLOW TEST:56.201 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":7,"skipped":409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:59.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Jun 3 23:20:59.187: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:20:59.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-8286" for this suite. 
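[editor's note] The AppArmor spec below this point is skipped because the node OS distro is debian rather than gci/ubuntu. For reference, on v1.21 an AppArmor profile is still requested per container via an annotation rather than a securityContext field; a hedged sketch follows, where the profile name is an example and must already be loaded on the node:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "apparmor-demo",
			Annotations: map[string]string{
				// The key suffix must match the container name; the profile
				// ("k8s-apparmor-example-deny-write") is an assumed example.
				"container.apparmor.security.beta.kubernetes.io/test": "localhost/k8s-apparmor-example-deny-write",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "touch /tmp/probe && sleep 3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```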
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ S ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:47.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 3 23:20:57.132: INFO: start=2022-06-03 23:20:51.49595519 +0000 UTC m=+105.831581008, now=2022-06-03 23:20:57.132096468 +0000 UTC m=+111.467722453, kubelet pod: {"metadata":{"name":"pod-submit-remove-eb89938e-e3e7-4a7b-a794-30ed77cc98bf","namespace":"pods-1052","uid":"7987b679-753e-4d56-83cf-472b25241023","resourceVersion":"78211","creationTimestamp":"2022-06-03T23:20:47Z","deletionTimestamp":"2022-06-03T23:21:21Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"464066636"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.124\"\n ],\n \"mac\": \"d6:5a:f6:a5:d5:fb\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.124\"\n ],\n \"mac\": \"d6:5a:f6:a5:d5:fb\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2022-06-03T23:20:47.483210063Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-06-03T23:20:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-wnvqd","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-wnvqd","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-03T23:20:47Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-03T23:20:50Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-03T23:20:50Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-06-03T23:20:47Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.124","podIPs":[{"ip":"10.244.4.124"}],"startTime":"2022-06-03T23:20:47Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2022-06-03T23:20:49Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://e3ae31eda36feaaccf5a388acd01a6ffb2214b75314940b2ec845e9552c9426b","started":true}],"qosClass":"BestEffort"}} Jun 3 23:21:01.601: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:21:01.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1052" for this suite. 
• [SLOW TEST:14.172 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":8,"skipped":731,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:51.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-c4390200-6cfe-445c-8545-e08b68289e31 in namespace container-probe-9818 Jun 3 23:20:05.356: INFO: Started pod busybox-c4390200-6cfe-445c-8545-e08b68289e31 in namespace container-probe-9818 Jun 3 23:20:05.356: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (1.778µs elapsed) Jun 3 23:20:07.356: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (2.000235999s elapsed) Jun 3 23:20:09.358: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (4.001744788s elapsed) Jun 3 23:20:11.360: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (6.003846599s elapsed) Jun 3 23:20:13.361: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (8.004355954s elapsed) Jun 3 23:20:15.362: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (10.005672191s elapsed) Jun 3 23:20:17.363: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (12.006535812s elapsed) Jun 3 23:20:19.366: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (14.010229586s elapsed) Jun 3 23:20:21.368: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (16.011496576s elapsed) Jun 3 23:20:23.370: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (18.014003514s elapsed) Jun 3 23:20:25.372: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (20.015691665s elapsed) Jun 3 23:20:27.373: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (22.016489083s elapsed) Jun 3 23:20:29.374: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (24.018179086s elapsed) Jun 3 23:20:31.375: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (26.018827434s 
elapsed) Jun 3 23:20:33.376: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (28.020151067s elapsed) Jun 3 23:20:35.377: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (30.021119523s elapsed) Jun 3 23:20:37.378: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (32.021330728s elapsed) Jun 3 23:20:39.382: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (34.025856984s elapsed) Jun 3 23:20:41.382: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (36.026110391s elapsed) Jun 3 23:20:43.384: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (38.027887151s elapsed) Jun 3 23:20:45.386: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (40.029457033s elapsed) Jun 3 23:20:47.387: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (42.03027782s elapsed) Jun 3 23:20:49.388: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (44.031752035s elapsed) Jun 3 23:20:51.390: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (46.033756291s elapsed) Jun 3 23:20:53.391: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (48.035209166s elapsed) Jun 3 23:20:55.395: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (50.038287907s elapsed) Jun 3 23:20:57.395: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (52.038718636s elapsed) Jun 3 23:20:59.397: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (54.040611222s elapsed) Jun 3 23:21:01.398: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (56.041517295s elapsed) Jun 3 23:21:03.399: INFO: pod container-probe-9818/busybox-c4390200-6cfe-445c-8545-e08b68289e31 is not ready (58.04242278s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:21:05.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9818" for this suite. 
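[editor's note] The pod above never becomes ready because its exec readiness probe exceeds its timeout on every attempt; unlike the liveness cases, a failing readiness probe only keeps the container out of Ready (hence ~58 seconds of "is not ready" polling with no restart). A minimal sketch of such a spec, with assumed commands and timings:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readiness-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					// Named ProbeHandler in newer k8s.io/api releases.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{
							// Always slower than TimeoutSeconds, so every attempt fails;
							// readiness failures never restart the container, they just
							// keep it NotReady.
							Command: []string{"sh", "-c", "sleep 10"},
						},
					},
					TimeoutSeconds:   1,
					PeriodSeconds:    10,
					FailureThreshold: 1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```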
• [SLOW TEST:74.103 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":815,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:59.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Jun 3 23:20:59.233: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-0fbc0075-3e5a-436b-b5e2-bd4bbea3885f" in namespace "security-context-test-5790" to be "Succeeded or Failed" Jun 3 23:20:59.239: INFO: Pod "busybox-readonly-true-0fbc0075-3e5a-436b-b5e2-bd4bbea3885f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.588092ms Jun 3 23:21:01.243: INFO: Pod "busybox-readonly-true-0fbc0075-3e5a-436b-b5e2-bd4bbea3885f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009663336s Jun 3 23:21:03.247: INFO: Pod "busybox-readonly-true-0fbc0075-3e5a-436b-b5e2-bd4bbea3885f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013126031s Jun 3 23:21:05.251: INFO: Pod "busybox-readonly-true-0fbc0075-3e5a-436b-b5e2-bd4bbea3885f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01733534s Jun 3 23:21:07.255: INFO: Pod "busybox-readonly-true-0fbc0075-3e5a-436b-b5e2-bd4bbea3885f": Phase="Failed", Reason="", readiness=false. Elapsed: 8.021459192s Jun 3 23:21:07.255: INFO: Pod "busybox-readonly-true-0fbc0075-3e5a-436b-b5e2-bd4bbea3885f" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:21:07.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5790" for this suite. 
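[editor's note] In the Security Context case above, the container runs with readOnlyRootFilesystem: true and the test simply waits for the pod to finish; the log shows the pod ending in Phase="Failed", consistent with the container's write to the root filesystem being rejected, which is what the "Succeeded or Failed" condition accepts. A hedged sketch of an equivalent pod; the exact command is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-true"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.28",
				// Assumed command: with a read-only root filesystem the write is
				// rejected, the shell exits non-zero, and the pod ends up Failed.
				Command: []string{"sh", "-c", "touch /should-fail"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: boolPtr(true),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```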
• [SLOW TEST:8.063 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:21:05.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 23:21:08.494: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:21:08.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4959" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":5,"skipped":827,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Jun 3 23:21:08.564: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:21:01.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 3 23:21:01.803: INFO: Waiting up to 5m0s for pod "security-context-371f50fc-de13-4958-a316-f6bc87f95c4b" in namespace "security-context-8478" to be "Succeeded or Failed" Jun 3 23:21:01.805: INFO: Pod "security-context-371f50fc-de13-4958-a316-f6bc87f95c4b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.572972ms Jun 3 23:21:03.810: INFO: Pod "security-context-371f50fc-de13-4958-a316-f6bc87f95c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006590796s Jun 3 23:21:05.813: INFO: Pod "security-context-371f50fc-de13-4958-a316-f6bc87f95c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010084475s Jun 3 23:21:07.816: INFO: Pod "security-context-371f50fc-de13-4958-a316-f6bc87f95c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013314671s Jun 3 23:21:09.822: INFO: Pod "security-context-371f50fc-de13-4958-a316-f6bc87f95c4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019288241s STEP: Saw pod success Jun 3 23:21:09.822: INFO: Pod "security-context-371f50fc-de13-4958-a316-f6bc87f95c4b" satisfied condition "Succeeded or Failed" Jun 3 23:21:09.824: INFO: Trying to get logs from node node2 pod security-context-371f50fc-de13-4958-a316-f6bc87f95c4b container test-container: STEP: delete the pod Jun 3 23:21:09.837: INFO: Waiting for pod security-context-371f50fc-de13-4958-a316-f6bc87f95c4b to disappear Jun 3 23:21:09.839: INFO: Pod security-context-371f50fc-de13-4958-a316-f6bc87f95c4b no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:21:09.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8478" for this suite. • [SLOW TEST:8.076 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":9,"skipped":814,"failed":0} Jun 3 23:21:09.848: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:51.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-1690104a-f390-41b4-8948-781809c7e3da in namespace container-probe-3021 Jun 3 23:21:01.539: INFO: Started pod startup-1690104a-f390-41b4-8948-781809c7e3da in namespace container-probe-3021 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 23:21:01.541: INFO: Initial restart count of pod startup-1690104a-f390-41b4-8948-781809c7e3da is 0 Jun 3 23:21:53.674: INFO: Restart count of pod container-probe-3021/startup-1690104a-f390-41b4-8948-781809c7e3da is now 1 (52.133196705s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:21:53.681: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3021" for this suite. • [SLOW TEST:62.197 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":3,"skipped":638,"failed":0} Jun 3 23:21:53.691: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:21:07.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-583659d4-59d6-410f-a4f6-780287765aee in namespace container-probe-8507 Jun 3 23:21:11.397: INFO: Started pod busybox-583659d4-59d6-410f-a4f6-780287765aee in namespace container-probe-8507 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 23:21:11.400: INFO: Initial restart count of pod busybox-583659d4-59d6-410f-a4f6-780287765aee is 0 Jun 3 23:22:01.527: INFO: Restart count of pod container-probe-8507/busybox-583659d4-59d6-410f-a4f6-780287765aee is now 1 (50.126994759s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:22:01.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8507" for this suite. 
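[editor's note] All of these probe cases share the observation pattern visible in the log: record the container's initial restartCount, then poll the pod until the count increases (e.g. "is now 1 (50.126994759s elapsed)" just above). A hedged client-go sketch of that polling loop — namespace, pod, and container names are placeholders, not the suite's generated names:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// restartCount returns the restart count of the named container, or 0 if it
// has no status yet.
func restartCount(pod *corev1.Pod, container string) int32 {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name == container {
			return cs.RestartCount
		}
	}
	return 0
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns, name, container := "default", "busybox-liveness", "busybox" // placeholders
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	initial := restartCount(pod, container)
	start := time.Now()

	for {
		time.Sleep(2 * time.Second)
		pod, err = client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if c := restartCount(pod, container); c > initial {
			fmt.Printf("restart count is now %d (%s elapsed)\n", c, time.Since(start))
			return
		}
	}
}
```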
• [SLOW TEST:54.196 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":9,"skipped":914,"failed":0} Jun 3 23:22:01.544: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:28.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Jun 3 23:19:29.976: INFO: watch delete seen for pod-submit-status-0-0 Jun 3 23:19:29.976: INFO: Pod pod-submit-status-0-0 on node node1 timings total=1.26637396s t=189ms run=0s execute=0s Jun 3 23:19:33.056: INFO: watch delete seen for pod-submit-status-2-0 Jun 3 23:19:33.056: INFO: Pod pod-submit-status-2-0 on node node1 timings total=4.346075194s t=894ms run=0s execute=0s Jun 3 23:19:35.374: INFO: watch delete seen for pod-submit-status-0-1 Jun 3 23:19:35.374: INFO: Pod pod-submit-status-0-1 on node node1 timings total=5.3972959s t=1.797s run=0s execute=0s Jun 3 23:19:35.668: INFO: watch delete seen for pod-submit-status-1-0 Jun 3 23:19:35.669: INFO: Pod pod-submit-status-1-0 on node node2 timings total=6.958433942s t=1.415s run=0s execute=0s Jun 3 23:19:39.263: INFO: watch delete seen for pod-submit-status-2-1 Jun 3 23:19:39.263: INFO: Pod pod-submit-status-2-1 on node node2 timings total=6.206659911s t=517ms run=0s execute=0s Jun 3 23:19:47.263: INFO: watch delete seen for pod-submit-status-1-1 Jun 3 23:19:47.263: INFO: Pod pod-submit-status-1-1 on node node2 timings total=11.594274926s t=1.912s run=0s execute=0s Jun 3 23:19:47.569: INFO: watch delete seen for pod-submit-status-2-2 Jun 3 23:19:47.569: INFO: Pod pod-submit-status-2-2 on node node1 timings total=8.305922294s t=1.182s run=0s execute=0s Jun 3 23:19:47.663: INFO: watch delete seen for pod-submit-status-0-2 Jun 3 23:19:47.663: INFO: Pod pod-submit-status-0-2 on node node2 timings total=12.288831474s t=687ms run=0s execute=0s Jun 3 23:19:56.969: INFO: watch delete seen for pod-submit-status-0-3 Jun 3 23:19:56.969: INFO: Pod pod-submit-status-0-3 on node node1 timings total=9.306108686s t=318ms run=0s execute=0s Jun 3 23:19:57.368: INFO: watch delete seen for pod-submit-status-1-2 Jun 3 23:19:57.368: INFO: Pod pod-submit-status-1-2 on node node1 timings total=10.105303927s t=856ms run=0s execute=0s Jun 3 23:19:59.168: INFO: watch delete seen for pod-submit-status-2-3 Jun 3 23:19:59.169: INFO: Pod pod-submit-status-2-3 on node node1 timings total=11.599708422s t=1.96s run=0s execute=0s Jun 3 23:20:01.168: INFO: watch delete seen 
for pod-submit-status-0-4 Jun 3 23:20:01.168: INFO: Pod pod-submit-status-0-4 on node node1 timings total=4.199549423s t=530ms run=0s execute=0s Jun 3 23:20:06.298: INFO: watch delete seen for pod-submit-status-0-5 Jun 3 23:20:06.298: INFO: Pod pod-submit-status-0-5 on node node2 timings total=5.129199119s t=354ms run=0s execute=0s Jun 3 23:20:06.969: INFO: watch delete seen for pod-submit-status-1-3 Jun 3 23:20:06.969: INFO: Pod pod-submit-status-1-3 on node node1 timings total=9.600480878s t=1.063s run=0s execute=0s Jun 3 23:20:09.569: INFO: watch delete seen for pod-submit-status-2-4 Jun 3 23:20:09.569: INFO: Pod pod-submit-status-2-4 on node node1 timings total=10.400118645s t=426ms run=0s execute=0s Jun 3 23:20:12.464: INFO: watch delete seen for pod-submit-status-2-5 Jun 3 23:20:12.464: INFO: Pod pod-submit-status-2-5 on node node2 timings total=2.895650135s t=1.275s run=0s execute=0s Jun 3 23:20:20.183: INFO: watch delete seen for pod-submit-status-1-4 Jun 3 23:20:20.183: INFO: Pod pod-submit-status-1-4 on node node2 timings total=13.21404619s t=1.517s run=3s execute=0s Jun 3 23:20:22.136: INFO: watch delete seen for pod-submit-status-0-6 Jun 3 23:20:22.136: INFO: Pod pod-submit-status-0-6 on node node1 timings total=15.83800599s t=986ms run=0s execute=0s Jun 3 23:20:30.189: INFO: watch delete seen for pod-submit-status-0-7 Jun 3 23:20:30.189: INFO: Pod pod-submit-status-0-7 on node node2 timings total=8.053104061s t=1.387s run=2s execute=0s Jun 3 23:20:32.121: INFO: watch delete seen for pod-submit-status-1-5 Jun 3 23:20:32.121: INFO: Pod pod-submit-status-1-5 on node node1 timings total=11.938292665s t=1.895s run=0s execute=0s Jun 3 23:20:40.181: INFO: watch delete seen for pod-submit-status-1-6 Jun 3 23:20:40.182: INFO: Pod pod-submit-status-1-6 on node node2 timings total=8.060248837s t=99ms run=0s execute=0s Jun 3 23:20:42.355: INFO: watch delete seen for pod-submit-status-1-7 Jun 3 23:20:42.356: INFO: Pod pod-submit-status-1-7 on node node2 timings total=2.173955554s t=135ms run=0s execute=0s Jun 3 23:20:50.192: INFO: watch delete seen for pod-submit-status-0-8 Jun 3 23:20:50.192: INFO: Pod pod-submit-status-0-8 on node node2 timings total=20.003411622s t=589ms run=0s execute=0s Jun 3 23:20:52.517: INFO: watch delete seen for pod-submit-status-0-9 Jun 3 23:20:52.518: INFO: Pod pod-submit-status-0-9 on node node2 timings total=2.325015894s t=680ms run=0s execute=0s Jun 3 23:20:56.118: INFO: watch delete seen for pod-submit-status-0-10 Jun 3 23:20:56.118: INFO: Pod pod-submit-status-0-10 on node node2 timings total=3.600357101s t=620ms run=0s execute=0s Jun 3 23:21:01.379: INFO: watch delete seen for pod-submit-status-0-11 Jun 3 23:21:01.379: INFO: Pod pod-submit-status-0-11 on node node2 timings total=5.260873607s t=1.497s run=0s execute=0s Jun 3 23:21:03.794: INFO: watch delete seen for pod-submit-status-1-8 Jun 3 23:21:03.794: INFO: Pod pod-submit-status-1-8 on node node2 timings total=21.4387998s t=1.516s run=0s execute=0s Jun 3 23:21:12.123: INFO: watch delete seen for pod-submit-status-0-12 Jun 3 23:21:12.123: INFO: Pod pod-submit-status-0-12 on node node1 timings total=10.744463913s t=362ms run=0s execute=0s Jun 3 23:21:20.189: INFO: watch delete seen for pod-submit-status-1-9 Jun 3 23:21:20.189: INFO: Pod pod-submit-status-1-9 on node node2 timings total=16.394283673s t=818ms run=0s execute=0s Jun 3 23:21:20.197: INFO: watch delete seen for pod-submit-status-0-13 Jun 3 23:21:20.198: INFO: Pod pod-submit-status-0-13 on node node2 timings total=8.074037129s t=1.833s run=2s 
execute=0s Jun 3 23:21:30.189: INFO: watch delete seen for pod-submit-status-0-14 Jun 3 23:21:30.189: INFO: Pod pod-submit-status-0-14 on node node2 timings total=9.991768243s t=619ms run=0s execute=0s Jun 3 23:21:30.219: INFO: watch delete seen for pod-submit-status-1-10 Jun 3 23:21:30.219: INFO: Pod pod-submit-status-1-10 on node node2 timings total=10.030539014s t=1.726s run=0s execute=0s Jun 3 23:21:42.765: INFO: watch delete seen for pod-submit-status-1-11 Jun 3 23:21:42.765: INFO: Pod pod-submit-status-1-11 on node node1 timings total=12.545553637s t=575ms run=0s execute=0s Jun 3 23:21:43.239: INFO: watch delete seen for pod-submit-status-2-6 Jun 3 23:21:43.239: INFO: Pod pod-submit-status-2-6 on node node1 timings total=1m30.774898101s t=537ms run=0s execute=0s Jun 3 23:21:50.188: INFO: watch delete seen for pod-submit-status-1-12 Jun 3 23:21:50.188: INFO: Pod pod-submit-status-1-12 on node node2 timings total=7.42296665s t=1.995s run=2s execute=0s Jun 3 23:21:52.115: INFO: watch delete seen for pod-submit-status-2-7 Jun 3 23:21:52.115: INFO: Pod pod-submit-status-2-7 on node node1 timings total=8.876072966s t=1.416s run=0s execute=0s Jun 3 23:22:00.188: INFO: watch delete seen for pod-submit-status-1-13 Jun 3 23:22:00.188: INFO: Pod pod-submit-status-1-13 on node node2 timings total=10.000273286s t=1.139s run=2s execute=0s Jun 3 23:22:00.197: INFO: watch delete seen for pod-submit-status-2-8 Jun 3 23:22:00.197: INFO: Pod pod-submit-status-2-8 on node node2 timings total=8.081572506s t=383ms run=0s execute=0s Jun 3 23:22:10.192: INFO: watch delete seen for pod-submit-status-2-9 Jun 3 23:22:10.192: INFO: Pod pod-submit-status-2-9 on node node2 timings total=9.994614631s t=1.753s run=0s execute=0s Jun 3 23:22:12.119: INFO: watch delete seen for pod-submit-status-1-14 Jun 3 23:22:12.119: INFO: Pod pod-submit-status-1-14 on node node1 timings total=11.930477949s t=1.399s run=0s execute=0s Jun 3 23:22:22.126: INFO: watch delete seen for pod-submit-status-2-10 Jun 3 23:22:22.126: INFO: Pod pod-submit-status-2-10 on node node1 timings total=11.934317613s t=913ms run=0s execute=0s Jun 3 23:22:25.141: INFO: watch delete seen for pod-submit-status-2-11 Jun 3 23:22:25.141: INFO: Pod pod-submit-status-2-11 on node node1 timings total=3.014917689s t=527ms run=0s execute=0s Jun 3 23:22:40.195: INFO: watch delete seen for pod-submit-status-2-12 Jun 3 23:22:40.195: INFO: Pod pod-submit-status-2-12 on node node2 timings total=15.05398169s t=904ms run=0s execute=0s Jun 3 23:22:50.195: INFO: watch delete seen for pod-submit-status-2-13 Jun 3 23:22:50.195: INFO: Pod pod-submit-status-2-13 on node node2 timings total=10.000198661s t=971ms run=2s execute=0s Jun 3 23:23:02.125: INFO: watch delete seen for pod-submit-status-2-14 Jun 3 23:23:02.125: INFO: Pod pod-submit-status-2-14 on node node1 timings total=11.929062932s t=1.754s run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:23:02.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2357" for this suite. 
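[editor's note] The Pod Container Status case above creates pods whose container always exits 1, deletes each after a random delay, and asserts that the pod is never reported as successful at any point in its lifecycle. A hedged sketch of that invariant as a status check; this is an illustration of the idea, not the e2e framework's actual checker:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// checkNeverSucceeded returns an error if a pod whose containers are expected
// to always exit non-zero is nevertheless reported as successful anywhere in
// its status.
func checkNeverSucceeded(pod *corev1.Pod) error {
	if pod.Status.Phase == corev1.PodSucceeded {
		return fmt.Errorf("pod %s reported phase Succeeded", pod.Name)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil && t.ExitCode == 0 {
			return fmt.Errorf("container %s reported exit code 0", cs.Name)
		}
		if t := cs.LastTerminationState.Terminated; t != nil && t.ExitCode == 0 {
			return fmt.Errorf("container %s reported a previous exit code 0", cs.Name)
		}
	}
	return nil
}

func main() {
	// In the real test this check would run against every watch event for every
	// pod-submit-status-* pod until the "watch delete seen" event arrives.
	fmt.Println(checkNeverSucceeded(&corev1.Pod{}) == nil)
}
```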
• [SLOW TEST:213.446 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":2,"skipped":463,"failed":0} Jun 3 23:23:02.137: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:18.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Jun 3 23:19:18.121: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Jun 3 23:19:19.134: INFO: node status heartbeat is unchanged for 1.00499846s, waiting for 1m20s Jun 3 23:19:20.133: INFO: node status heartbeat is unchanged for 2.003497914s, waiting for 1m20s Jun 3 23:19:21.134: INFO: node status heartbeat is unchanged for 3.004623195s, waiting for 1m20s Jun 3 23:19:22.134: INFO: node status heartbeat is unchanged for 4.00479915s, waiting for 1m20s Jun 3 23:19:23.134: INFO: node status heartbeat is unchanged for 5.004490592s, waiting for 1m20s Jun 3 23:19:24.134: INFO: node status heartbeat is unchanged for 6.004849578s, waiting for 1m20s Jun 3 23:19:25.133: INFO: node status heartbeat is unchanged for 7.003568409s, waiting for 1m20s Jun 3 23:19:26.134: INFO: node status heartbeat is unchanged for 8.004076754s, waiting for 1m20s Jun 3 23:19:27.134: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:19:27.139: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:26 +0000 UTC"}, LastTransitionTime: {Time: 
s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:26 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:26 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Jun 3 23:19:28.134: INFO: node status heartbeat is unchanged for 999.913933ms, waiting for 1m20s Jun 3 23:19:29.134: INFO: node status heartbeat is unchanged for 2.000006826s, waiting for 1m20s Jun 3 23:19:30.136: INFO: node status heartbeat is unchanged for 3.001823836s, waiting for 1m20s Jun 3 23:19:31.133: INFO: node status heartbeat is unchanged for 3.999496616s, waiting for 1m20s Jun 3 23:19:32.133: INFO: node status heartbeat is unchanged for 4.999359873s, waiting for 1m20s Jun 3 23:19:33.136: INFO: node status heartbeat is unchanged for 6.002200322s, waiting for 1m20s Jun 3 23:19:34.135: INFO: node status heartbeat is unchanged for 7.001369921s, waiting for 1m20s Jun 3 23:19:35.133: INFO: node status heartbeat is unchanged for 7.999254516s, waiting for 1m20s Jun 3 23:19:36.133: INFO: node status heartbeat is unchanged for 8.999425897s, waiting for 1m20s Jun 3 23:19:37.134: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:19:37.138: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:36 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:36 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { 
Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:36 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Jun 3 23:19:38.134: INFO: node status heartbeat is unchanged for 1.000535426s, waiting for 1m20s Jun 3 23:19:39.134: INFO: node status heartbeat is unchanged for 2.000714593s, waiting for 1m20s Jun 3 23:19:40.136: INFO: node status heartbeat is unchanged for 3.002401165s, waiting for 1m20s Jun 3 23:19:41.135: INFO: node status heartbeat is unchanged for 4.00100553s, waiting for 1m20s Jun 3 23:19:42.136: INFO: node status heartbeat is unchanged for 5.002078041s, waiting for 1m20s Jun 3 23:19:43.134: INFO: node status heartbeat is unchanged for 6.000385268s, waiting for 1m20s Jun 3 23:19:44.134: INFO: node status heartbeat is unchanged for 7.000910597s, waiting for 1m20s Jun 3 23:19:45.134: INFO: node status heartbeat is unchanged for 8.000111536s, waiting for 1m20s Jun 3 23:19:46.134: INFO: node status heartbeat is unchanged for 9.000042449s, waiting for 1m20s Jun 3 23:19:47.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:19:47.140: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:46 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:46 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:46 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, 
Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Jun 3 23:19:48.135: INFO: node status heartbeat is unchanged for 1.000202988s, waiting for 1m20s Jun 3 23:19:49.136: INFO: node status heartbeat is unchanged for 2.001537623s, waiting for 1m20s Jun 3 23:19:50.134: INFO: node status heartbeat is unchanged for 2.998956618s, waiting for 1m20s Jun 3 23:19:51.134: INFO: node status heartbeat is unchanged for 3.999051233s, waiting for 1m20s Jun 3 23:19:52.159: INFO: node status heartbeat is unchanged for 5.024164344s, waiting for 1m20s Jun 3 23:19:53.134: INFO: node status heartbeat is unchanged for 5.999217457s, waiting for 1m20s Jun 3 23:19:54.135: INFO: node status heartbeat is unchanged for 7.00021461s, waiting for 1m20s Jun 3 23:19:55.136: INFO: node status heartbeat is unchanged for 8.00182302s, waiting for 1m20s Jun 3 23:19:56.134: INFO: node status heartbeat is unchanged for 8.999505657s, waiting for 1m20s Jun 3 23:19:57.136: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:19:57.140: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:56 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:56 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:56 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:19:58.135: INFO: node status heartbeat is unchanged for 999.801279ms, waiting for 1m20s Jun 3 23:19:59.136: INFO: node status heartbeat is unchanged for 2.000528584s, waiting for 1m20s Jun 3 23:20:00.135: INFO: node status heartbeat is unchanged for 2.999252499s, waiting for 1m20s Jun 3 23:20:01.134: INFO: node status heartbeat is unchanged for 3.998781338s, waiting for 1m20s Jun 3 23:20:02.133: INFO: node status heartbeat is unchanged for 4.997719615s, waiting for 1m20s Jun 3 23:20:03.135: INFO: node status heartbeat is unchanged for 5.999759278s, waiting for 1m20s Jun 3 23:20:04.138: INFO: node status heartbeat is unchanged for 7.002628182s, waiting for 1m20s Jun 3 23:20:05.135: INFO: node status heartbeat is unchanged for 7.999685427s, waiting for 1m20s Jun 3 23:20:06.134: INFO: node status heartbeat is unchanged for 8.998816781s, waiting for 1m20s Jun 3 23:20:07.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:20:07.140: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:06 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:06 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:19:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:06 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:20:08.136: INFO: node status heartbeat is unchanged for 1.000641386s, waiting for 1m20s Jun 3 23:20:09.136: INFO: node status heartbeat is unchanged for 2.000699815s, waiting for 1m20s Jun 3 23:20:10.135: INFO: node status heartbeat is unchanged for 2.999392049s, waiting for 1m20s Jun 3 23:20:11.134: INFO: node status heartbeat is unchanged for 3.99892229s, waiting for 1m20s Jun 3 23:20:12.136: INFO: node status heartbeat is unchanged for 5.000642904s, waiting for 1m20s Jun 3 23:20:13.133: INFO: node status heartbeat is unchanged for 5.997803774s, waiting for 1m20s Jun 3 23:20:14.135: INFO: node status heartbeat is unchanged for 6.999932489s, waiting for 1m20s Jun 3 23:20:15.137: INFO: node status heartbeat is unchanged for 8.001364589s, waiting for 1m20s Jun 3 23:20:16.134: INFO: node status heartbeat is unchanged for 8.998806015s, waiting for 1m20s Jun 3 23:20:17.133: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:20:17.138: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:20:18.134: INFO: node status heartbeat is unchanged for 1.00105864s, waiting for 1m20s Jun 3 23:20:19.135: INFO: node status heartbeat is unchanged for 2.00210045s, waiting for 1m20s Jun 3 23:20:20.133: INFO: node status heartbeat is unchanged for 3.000187579s, waiting for 1m20s Jun 3 23:20:21.134: INFO: node status heartbeat is unchanged for 4.000693597s, waiting for 1m20s Jun 3 23:20:22.135: INFO: node status heartbeat is unchanged for 5.001684654s, waiting for 1m20s Jun 3 23:20:23.135: INFO: node status heartbeat is unchanged for 6.001332071s, waiting for 1m20s Jun 3 23:20:24.137: INFO: node status heartbeat is unchanged for 7.004102694s, waiting for 1m20s Jun 3 23:20:25.134: INFO: node status heartbeat is unchanged for 8.000492998s, waiting for 1m20s Jun 3 23:20:26.134: INFO: node status heartbeat is unchanged for 9.000363132s, waiting for 1m20s Jun 3 23:20:27.137: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:20:27.142: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:26 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:26 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:26 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, NodeInfo: {MachineID: "73f6f7c4482d4ddfadf38b35a5d03575", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "14b04379-324d-413e-8b7f-b1dff077c955", KernelVersion: "3.10.0-1160.66.1.el7.x86_64", ...}, Images: []v1.ContainerImage{ ... 
// 32 identical elements {Names: {"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf"..., "k8s.gcr.io/e2e-test-images/nonewprivs:1.3"}, SizeBytes: 7107254}, {Names: {"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172"..., "appropriate/curl:edge"}, SizeBytes: 5654234}, + { + Names: []string{ + "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c6"..., + "gcr.io/authenticated-image-pulling/alpine:3.7", + }, + SizeBytes: 4206620, + }, {Names: {"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad"..., "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}, SizeBytes: 1154361}, {Names: {"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea"..., "busybox:1.28"}, SizeBytes: 1146369}, ... // 2 identical elements }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } Jun 3 23:20:28.134: INFO: node status heartbeat is unchanged for 996.848109ms, waiting for 1m20s Jun 3 23:20:29.134: INFO: node status heartbeat is unchanged for 1.997334802s, waiting for 1m20s Jun 3 23:20:30.132: INFO: node status heartbeat is unchanged for 2.995699383s, waiting for 1m20s Jun 3 23:20:31.135: INFO: node status heartbeat is unchanged for 3.998016445s, waiting for 1m20s Jun 3 23:20:32.139: INFO: node status heartbeat is unchanged for 5.00235897s, waiting for 1m20s Jun 3 23:20:33.134: INFO: node status heartbeat is unchanged for 5.997100962s, waiting for 1m20s Jun 3 23:20:34.134: INFO: node status heartbeat is unchanged for 6.99687947s, waiting for 1m20s Jun 3 23:20:35.133: INFO: node status heartbeat is unchanged for 7.996747063s, waiting for 1m20s Jun 3 23:20:36.134: INFO: node status heartbeat is unchanged for 8.996958826s, waiting for 1m20s Jun 3 23:20:37.134: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Jun 3 23:20:37.139: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: 
"KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Jun 3 23:20:38.134: INFO: node status heartbeat is unchanged for 999.627743ms, waiting for 1m20s Jun 3 23:20:39.135: INFO: node status heartbeat is unchanged for 2.000734762s, waiting for 1m20s Jun 3 23:20:40.134: INFO: node status heartbeat is unchanged for 2.999578618s, waiting for 1m20s Jun 3 23:20:41.133: INFO: node status heartbeat is unchanged for 3.999182734s, waiting for 1m20s Jun 3 23:20:42.134: INFO: node status heartbeat is unchanged for 4.999693311s, waiting for 1m20s Jun 3 23:20:43.133: INFO: node status heartbeat is unchanged for 5.999387566s, waiting for 1m20s Jun 3 23:20:44.136: INFO: node status heartbeat is unchanged for 7.00225324s, waiting for 1m20s Jun 3 23:20:45.133: INFO: node status heartbeat is unchanged for 7.998888446s, waiting for 1m20s Jun 3 23:20:46.134: INFO: node status heartbeat is unchanged for 9.000109147s, waiting for 1m20s Jun 3 23:20:47.133: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:20:47.137: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:20:48.133: INFO: node status heartbeat is unchanged for 1.000354606s, waiting for 1m20s Jun 3 23:20:49.134: INFO: node status heartbeat is unchanged for 2.001519741s, waiting for 1m20s Jun 3 23:20:50.133: INFO: node status heartbeat is unchanged for 3.000139309s, waiting for 1m20s Jun 3 23:20:51.135: INFO: node status heartbeat is unchanged for 4.002096227s, waiting for 1m20s Jun 3 23:20:52.133: INFO: node status heartbeat is unchanged for 5.000662127s, waiting for 1m20s Jun 3 23:20:53.133: INFO: node status heartbeat is unchanged for 6.000848817s, waiting for 1m20s Jun 3 23:20:54.133: INFO: node status heartbeat is unchanged for 7.000712409s, waiting for 1m20s Jun 3 23:20:55.133: INFO: node status heartbeat is unchanged for 8.001042437s, waiting for 1m20s Jun 3 23:20:56.133: INFO: node status heartbeat is unchanged for 9.000394832s, waiting for 1m20s Jun 3 23:20:57.134: INFO: node status heartbeat is unchanged for 10.001815729s, waiting for 1m20s Jun 3 23:20:58.133: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:20:58.138: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:20:59.133: INFO: node status heartbeat is unchanged for 1.000061594s, waiting for 1m20s Jun 3 23:21:00.134: INFO: node status heartbeat is unchanged for 2.000406132s, waiting for 1m20s Jun 3 23:21:01.134: INFO: node status heartbeat is unchanged for 3.000637082s, waiting for 1m20s Jun 3 23:21:02.135: INFO: node status heartbeat is unchanged for 4.002160433s, waiting for 1m20s Jun 3 23:21:03.134: INFO: node status heartbeat is unchanged for 5.000325072s, waiting for 1m20s Jun 3 23:21:04.136: INFO: node status heartbeat is unchanged for 6.002684117s, waiting for 1m20s Jun 3 23:21:05.134: INFO: node status heartbeat is unchanged for 7.001099585s, waiting for 1m20s Jun 3 23:21:06.133: INFO: node status heartbeat is unchanged for 8.000057221s, waiting for 1m20s Jun 3 23:21:07.133: INFO: node status heartbeat is unchanged for 9.000273399s, waiting for 1m20s Jun 3 23:21:08.137: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:21:08.142: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:20:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:21:09.134: INFO: node status heartbeat is unchanged for 997.306087ms, waiting for 1m20s Jun 3 23:21:10.133: INFO: node status heartbeat is unchanged for 1.996357423s, waiting for 1m20s Jun 3 23:21:11.134: INFO: node status heartbeat is unchanged for 2.996661466s, waiting for 1m20s Jun 3 23:21:12.135: INFO: node status heartbeat is unchanged for 3.997828374s, waiting for 1m20s Jun 3 23:21:13.134: INFO: node status heartbeat is unchanged for 4.997001911s, waiting for 1m20s Jun 3 23:21:14.135: INFO: node status heartbeat is unchanged for 5.998061989s, waiting for 1m20s Jun 3 23:21:15.133: INFO: node status heartbeat is unchanged for 6.996521793s, waiting for 1m20s Jun 3 23:21:16.134: INFO: node status heartbeat is unchanged for 7.997172297s, waiting for 1m20s Jun 3 23:21:17.136: INFO: node status heartbeat is unchanged for 8.999280922s, waiting for 1m20s Jun 3 23:21:18.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:21:18.140: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:21:19.134: INFO: node status heartbeat is unchanged for 998.804501ms, waiting for 1m20s Jun 3 23:21:20.135: INFO: node status heartbeat is unchanged for 2.000438962s, waiting for 1m20s Jun 3 23:21:21.135: INFO: node status heartbeat is unchanged for 2.999756811s, waiting for 1m20s Jun 3 23:21:22.135: INFO: node status heartbeat is unchanged for 4.000241194s, waiting for 1m20s Jun 3 23:21:23.134: INFO: node status heartbeat is unchanged for 4.999360044s, waiting for 1m20s Jun 3 23:21:24.137: INFO: node status heartbeat is unchanged for 6.002423706s, waiting for 1m20s Jun 3 23:21:25.133: INFO: node status heartbeat is unchanged for 6.998487826s, waiting for 1m20s Jun 3 23:21:26.133: INFO: node status heartbeat is unchanged for 7.99847909s, waiting for 1m20s Jun 3 23:21:27.135: INFO: node status heartbeat is unchanged for 8.999584673s, waiting for 1m20s Jun 3 23:21:28.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:21:28.139: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:21:29.137: INFO: node status heartbeat is unchanged for 1.00204333s, waiting for 1m20s Jun 3 23:21:30.134: INFO: node status heartbeat is unchanged for 1.999165699s, waiting for 1m20s Jun 3 23:21:31.135: INFO: node status heartbeat is unchanged for 3.000168031s, waiting for 1m20s Jun 3 23:21:32.135: INFO: node status heartbeat is unchanged for 4.000564515s, waiting for 1m20s Jun 3 23:21:33.134: INFO: node status heartbeat is unchanged for 4.998988037s, waiting for 1m20s Jun 3 23:21:34.135: INFO: node status heartbeat is unchanged for 6.000649333s, waiting for 1m20s Jun 3 23:21:35.137: INFO: node status heartbeat is unchanged for 7.002166101s, waiting for 1m20s Jun 3 23:21:36.134: INFO: node status heartbeat is unchanged for 7.999379653s, waiting for 1m20s Jun 3 23:21:37.135: INFO: node status heartbeat is unchanged for 9.000651506s, waiting for 1m20s Jun 3 23:21:38.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:21:38.139: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:21:39.135: INFO: node status heartbeat is unchanged for 999.972601ms, waiting for 1m20s Jun 3 23:21:40.136: INFO: node status heartbeat is unchanged for 2.001083968s, waiting for 1m20s Jun 3 23:21:41.133: INFO: node status heartbeat is unchanged for 2.997976521s, waiting for 1m20s Jun 3 23:21:42.134: INFO: node status heartbeat is unchanged for 3.999270608s, waiting for 1m20s Jun 3 23:21:43.135: INFO: node status heartbeat is unchanged for 5.000079753s, waiting for 1m20s Jun 3 23:21:44.134: INFO: node status heartbeat is unchanged for 5.999337773s, waiting for 1m20s Jun 3 23:21:45.134: INFO: node status heartbeat is unchanged for 6.999122784s, waiting for 1m20s Jun 3 23:21:46.136: INFO: node status heartbeat is unchanged for 8.000956694s, waiting for 1m20s Jun 3 23:21:47.133: INFO: node status heartbeat is unchanged for 8.998364806s, waiting for 1m20s Jun 3 23:21:48.134: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:21:48.138: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:21:49.135: INFO: node status heartbeat is unchanged for 1.001396892s, waiting for 1m20s Jun 3 23:21:50.134: INFO: node status heartbeat is unchanged for 1.999783751s, waiting for 1m20s Jun 3 23:21:51.135: INFO: node status heartbeat is unchanged for 3.001556179s, waiting for 1m20s Jun 3 23:21:52.133: INFO: node status heartbeat is unchanged for 3.999180931s, waiting for 1m20s Jun 3 23:21:53.134: INFO: node status heartbeat is unchanged for 5.000447941s, waiting for 1m20s Jun 3 23:21:54.134: INFO: node status heartbeat is unchanged for 6.000487812s, waiting for 1m20s Jun 3 23:21:55.134: INFO: node status heartbeat is unchanged for 6.999881048s, waiting for 1m20s Jun 3 23:21:56.133: INFO: node status heartbeat is unchanged for 7.999474154s, waiting for 1m20s Jun 3 23:21:57.134: INFO: node status heartbeat is unchanged for 8.999853987s, waiting for 1m20s Jun 3 23:21:58.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:21:58.139: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:21:59.133: INFO: node status heartbeat is unchanged for 998.810428ms, waiting for 1m20s Jun 3 23:22:00.135: INFO: node status heartbeat is unchanged for 2.000506701s, waiting for 1m20s Jun 3 23:22:01.135: INFO: node status heartbeat is unchanged for 3.000130332s, waiting for 1m20s Jun 3 23:22:02.135: INFO: node status heartbeat is unchanged for 4.000716245s, waiting for 1m20s Jun 3 23:22:03.135: INFO: node status heartbeat is unchanged for 5.000164525s, waiting for 1m20s Jun 3 23:22:04.137: INFO: node status heartbeat is unchanged for 6.002241942s, waiting for 1m20s Jun 3 23:22:05.134: INFO: node status heartbeat is unchanged for 6.999927963s, waiting for 1m20s Jun 3 23:22:06.134: INFO: node status heartbeat is unchanged for 7.999423856s, waiting for 1m20s Jun 3 23:22:07.134: INFO: node status heartbeat is unchanged for 8.999943172s, waiting for 1m20s Jun 3 23:22:08.136: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:22:08.140: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:21:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:22:09.133: INFO: node status heartbeat is unchanged for 997.822499ms, waiting for 1m20s Jun 3 23:22:10.136: INFO: node status heartbeat is unchanged for 2.000312854s, waiting for 1m20s Jun 3 23:22:11.134: INFO: node status heartbeat is unchanged for 2.998678471s, waiting for 1m20s Jun 3 23:22:12.134: INFO: node status heartbeat is unchanged for 3.998486743s, waiting for 1m20s Jun 3 23:22:13.134: INFO: node status heartbeat is unchanged for 4.998611593s, waiting for 1m20s Jun 3 23:22:14.136: INFO: node status heartbeat is unchanged for 6.000161519s, waiting for 1m20s Jun 3 23:22:15.135: INFO: node status heartbeat is unchanged for 6.999000962s, waiting for 1m20s Jun 3 23:22:16.134: INFO: node status heartbeat is unchanged for 7.998312273s, waiting for 1m20s Jun 3 23:22:17.135: INFO: node status heartbeat is unchanged for 8.999046277s, waiting for 1m20s Jun 3 23:22:18.134: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:22:18.139: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:22:19.134: INFO: node status heartbeat is unchanged for 999.42865ms, waiting for 1m20s Jun 3 23:22:20.134: INFO: node status heartbeat is unchanged for 2.000247998s, waiting for 1m20s Jun 3 23:22:21.134: INFO: node status heartbeat is unchanged for 2.999438735s, waiting for 1m20s Jun 3 23:22:22.134: INFO: node status heartbeat is unchanged for 3.999686912s, waiting for 1m20s Jun 3 23:22:23.134: INFO: node status heartbeat is unchanged for 4.99954893s, waiting for 1m20s Jun 3 23:22:24.134: INFO: node status heartbeat is unchanged for 5.999641378s, waiting for 1m20s Jun 3 23:22:25.133: INFO: node status heartbeat is unchanged for 6.999086971s, waiting for 1m20s Jun 3 23:22:26.134: INFO: node status heartbeat is unchanged for 7.999476458s, waiting for 1m20s Jun 3 23:22:27.134: INFO: node status heartbeat is unchanged for 8.999496273s, waiting for 1m20s Jun 3 23:22:28.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:22:28.140: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:22:29.135: INFO: node status heartbeat is unchanged for 999.774393ms, waiting for 1m20s Jun 3 23:22:30.134: INFO: node status heartbeat is unchanged for 1.998713451s, waiting for 1m20s Jun 3 23:22:31.134: INFO: node status heartbeat is unchanged for 2.999316225s, waiting for 1m20s Jun 3 23:22:32.136: INFO: node status heartbeat is unchanged for 4.001266367s, waiting for 1m20s Jun 3 23:22:33.134: INFO: node status heartbeat is unchanged for 4.998640377s, waiting for 1m20s Jun 3 23:22:34.137: INFO: node status heartbeat is unchanged for 6.001867955s, waiting for 1m20s Jun 3 23:22:35.136: INFO: node status heartbeat is unchanged for 7.000467013s, waiting for 1m20s Jun 3 23:22:36.134: INFO: node status heartbeat is unchanged for 7.999209391s, waiting for 1m20s Jun 3 23:22:37.134: INFO: node status heartbeat is unchanged for 8.998710956s, waiting for 1m20s Jun 3 23:22:38.135: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Jun 3 23:22:38.139: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:38 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:38 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:38 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:22:39.136: INFO: node status heartbeat is unchanged for 1.001504712s, waiting for 1m20s Jun 3 23:22:40.136: INFO: node status heartbeat is unchanged for 2.001134823s, waiting for 1m20s Jun 3 23:22:41.134: INFO: node status heartbeat is unchanged for 2.999452291s, waiting for 1m20s Jun 3 23:22:42.135: INFO: node status heartbeat is unchanged for 4.000121784s, waiting for 1m20s Jun 3 23:22:43.135: INFO: node status heartbeat is unchanged for 5.000096628s, waiting for 1m20s Jun 3 23:22:44.137: INFO: node status heartbeat is unchanged for 6.001977804s, waiting for 1m20s Jun 3 23:22:45.134: INFO: node status heartbeat is unchanged for 6.999576895s, waiting for 1m20s Jun 3 23:22:46.134: INFO: node status heartbeat is unchanged for 7.998969459s, waiting for 1m20s Jun 3 23:22:47.136: INFO: node status heartbeat is unchanged for 9.00104084s, waiting for 1m20s Jun 3 23:22:48.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:22:48.140: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:48 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:48 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:48 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:22:49.135: INFO: node status heartbeat is unchanged for 999.266573ms, waiting for 1m20s Jun 3 23:22:50.135: INFO: node status heartbeat is unchanged for 1.999954251s, waiting for 1m20s Jun 3 23:22:51.134: INFO: node status heartbeat is unchanged for 2.998964125s, waiting for 1m20s Jun 3 23:22:52.135: INFO: node status heartbeat is unchanged for 3.999735575s, waiting for 1m20s Jun 3 23:22:53.134: INFO: node status heartbeat is unchanged for 4.998466914s, waiting for 1m20s Jun 3 23:22:54.135: INFO: node status heartbeat is unchanged for 5.999542799s, waiting for 1m20s Jun 3 23:22:55.134: INFO: node status heartbeat is unchanged for 6.998308419s, waiting for 1m20s Jun 3 23:22:56.135: INFO: node status heartbeat is unchanged for 7.999128213s, waiting for 1m20s Jun 3 23:22:57.134: INFO: node status heartbeat is unchanged for 8.99841589s, waiting for 1m20s Jun 3 23:22:58.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:22:58.139: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:58 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:58 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:58 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:22:59.134: INFO: node status heartbeat is unchanged for 999.323496ms, waiting for 1m20s Jun 3 23:23:00.135: INFO: node status heartbeat is unchanged for 2.000361391s, waiting for 1m20s Jun 3 23:23:01.133: INFO: node status heartbeat is unchanged for 2.998703274s, waiting for 1m20s Jun 3 23:23:02.135: INFO: node status heartbeat is unchanged for 4.000596626s, waiting for 1m20s Jun 3 23:23:03.134: INFO: node status heartbeat is unchanged for 4.999342182s, waiting for 1m20s Jun 3 23:23:04.136: INFO: node status heartbeat is unchanged for 6.00142278s, waiting for 1m20s Jun 3 23:23:05.136: INFO: node status heartbeat is unchanged for 7.000733658s, waiting for 1m20s Jun 3 23:23:06.135: INFO: node status heartbeat is unchanged for 8.00010957s, waiting for 1m20s Jun 3 23:23:07.137: INFO: node status heartbeat is unchanged for 9.002563463s, waiting for 1m20s Jun 3 23:23:08.133: INFO: node status heartbeat is unchanged for 9.99859772s, waiting for 1m20s Jun 3 23:23:09.138: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:23:09.143: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:08 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:08 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:22:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:08 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:23:10.135: INFO: node status heartbeat is unchanged for 996.855571ms, waiting for 1m20s Jun 3 23:23:11.133: INFO: node status heartbeat is unchanged for 1.995247526s, waiting for 1m20s Jun 3 23:23:12.134: INFO: node status heartbeat is unchanged for 2.996186523s, waiting for 1m20s Jun 3 23:23:13.135: INFO: node status heartbeat is unchanged for 3.996981183s, waiting for 1m20s Jun 3 23:23:14.134: INFO: node status heartbeat is unchanged for 4.996532814s, waiting for 1m20s Jun 3 23:23:15.134: INFO: node status heartbeat is unchanged for 5.995704548s, waiting for 1m20s Jun 3 23:23:16.134: INFO: node status heartbeat is unchanged for 6.995793896s, waiting for 1m20s Jun 3 23:23:17.134: INFO: node status heartbeat is unchanged for 7.996478002s, waiting for 1m20s Jun 3 23:23:18.134: INFO: node status heartbeat is unchanged for 8.996028095s, waiting for 1m20s Jun 3 23:23:19.135: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:23:19.139: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:18 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:18 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:18 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:23:20.134: INFO: node status heartbeat is unchanged for 999.370222ms, waiting for 1m20s Jun 3 23:23:21.134: INFO: node status heartbeat is unchanged for 1.999533495s, waiting for 1m20s Jun 3 23:23:22.134: INFO: node status heartbeat is unchanged for 2.999167075s, waiting for 1m20s Jun 3 23:23:23.133: INFO: node status heartbeat is unchanged for 3.998360149s, waiting for 1m20s Jun 3 23:23:24.134: INFO: node status heartbeat is unchanged for 4.999136928s, waiting for 1m20s Jun 3 23:23:25.134: INFO: node status heartbeat is unchanged for 5.99967717s, waiting for 1m20s Jun 3 23:23:26.134: INFO: node status heartbeat is unchanged for 6.999195488s, waiting for 1m20s Jun 3 23:23:27.134: INFO: node status heartbeat is unchanged for 7.999463351s, waiting for 1m20s Jun 3 23:23:28.135: INFO: node status heartbeat is unchanged for 8.999941864s, waiting for 1m20s Jun 3 23:23:29.137: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:23:29.141: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:28 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:28 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:28 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:23:30.135: INFO: node status heartbeat is unchanged for 998.207586ms, waiting for 1m20s Jun 3 23:23:31.134: INFO: node status heartbeat is unchanged for 1.997882229s, waiting for 1m20s Jun 3 23:23:32.134: INFO: node status heartbeat is unchanged for 2.997305993s, waiting for 1m20s Jun 3 23:23:33.134: INFO: node status heartbeat is unchanged for 3.99762488s, waiting for 1m20s Jun 3 23:23:34.133: INFO: node status heartbeat is unchanged for 4.996639499s, waiting for 1m20s Jun 3 23:23:35.133: INFO: node status heartbeat is unchanged for 5.996429552s, waiting for 1m20s Jun 3 23:23:36.135: INFO: node status heartbeat is unchanged for 6.997948443s, waiting for 1m20s Jun 3 23:23:37.135: INFO: node status heartbeat is unchanged for 7.998898669s, waiting for 1m20s Jun 3 23:23:38.135: INFO: node status heartbeat is unchanged for 8.998345008s, waiting for 1m20s Jun 3 23:23:39.136: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:23:39.141: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:38 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:38 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:38 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:23:40.135: INFO: node status heartbeat is unchanged for 998.58606ms, waiting for 1m20s Jun 3 23:23:41.134: INFO: node status heartbeat is unchanged for 1.998035244s, waiting for 1m20s Jun 3 23:23:42.135: INFO: node status heartbeat is unchanged for 2.99894767s, waiting for 1m20s Jun 3 23:23:43.134: INFO: node status heartbeat is unchanged for 3.998296095s, waiting for 1m20s Jun 3 23:23:44.136: INFO: node status heartbeat is unchanged for 5.000439222s, waiting for 1m20s Jun 3 23:23:45.136: INFO: node status heartbeat is unchanged for 5.999714164s, waiting for 1m20s Jun 3 23:23:46.134: INFO: node status heartbeat is unchanged for 6.998173429s, waiting for 1m20s Jun 3 23:23:47.136: INFO: node status heartbeat is unchanged for 7.999907063s, waiting for 1m20s Jun 3 23:23:48.134: INFO: node status heartbeat is unchanged for 8.998507372s, waiting for 1m20s Jun 3 23:23:49.137: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:23:49.141: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:48 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:48 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:48 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:23:50.136: INFO: node status heartbeat is unchanged for 998.973501ms, waiting for 1m20s Jun 3 23:23:51.134: INFO: node status heartbeat is unchanged for 1.997587916s, waiting for 1m20s Jun 3 23:23:52.135: INFO: node status heartbeat is unchanged for 2.997897301s, waiting for 1m20s Jun 3 23:23:53.133: INFO: node status heartbeat is unchanged for 3.996239978s, waiting for 1m20s Jun 3 23:23:54.134: INFO: node status heartbeat is unchanged for 4.996973566s, waiting for 1m20s Jun 3 23:23:55.133: INFO: node status heartbeat is unchanged for 5.996837559s, waiting for 1m20s Jun 3 23:23:56.134: INFO: node status heartbeat is unchanged for 6.997084401s, waiting for 1m20s Jun 3 23:23:57.136: INFO: node status heartbeat is unchanged for 7.999483388s, waiting for 1m20s Jun 3 23:23:58.135: INFO: node status heartbeat is unchanged for 8.99833534s, waiting for 1m20s Jun 3 23:23:59.136: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:23:59.140: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:58 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:58 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:58 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:24:00.136: INFO: node status heartbeat is unchanged for 1.000659979s, waiting for 1m20s Jun 3 23:24:01.134: INFO: node status heartbeat is unchanged for 1.998085726s, waiting for 1m20s Jun 3 23:24:02.134: INFO: node status heartbeat is unchanged for 2.998215887s, waiting for 1m20s Jun 3 23:24:03.157: INFO: node status heartbeat is unchanged for 4.021476285s, waiting for 1m20s Jun 3 23:24:04.135: INFO: node status heartbeat is unchanged for 4.999896437s, waiting for 1m20s Jun 3 23:24:05.135: INFO: node status heartbeat is unchanged for 5.999135397s, waiting for 1m20s Jun 3 23:24:06.134: INFO: node status heartbeat is unchanged for 6.9984641s, waiting for 1m20s Jun 3 23:24:07.136: INFO: node status heartbeat is unchanged for 8.000067544s, waiting for 1m20s Jun 3 23:24:08.135: INFO: node status heartbeat is unchanged for 8.999335558s, waiting for 1m20s Jun 3 23:24:09.137: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Jun 3 23:24:09.141: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 20:03:25 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:24:08 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:24:08 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:23:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-06-03 23:24:08 +0000 UTC"}, LastTransitionTime: {Time: s"2022-06-03 19:59:32 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-06-03 20:03:20 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Jun 3 23:24:10.134: INFO: node status heartbeat is unchanged for 997.337676ms, waiting for 1m20s Jun 3 23:24:11.134: INFO: node status heartbeat is unchanged for 1.997696857s, waiting for 1m20s Jun 3 23:24:12.136: INFO: node status heartbeat is unchanged for 2.999275238s, waiting for 1m20s Jun 3 23:24:13.135: INFO: node status heartbeat is unchanged for 3.998828091s, waiting for 1m20s Jun 3 23:24:14.134: INFO: node status heartbeat is unchanged for 4.997353208s, waiting for 1m20s Jun 3 23:24:15.135: INFO: node status heartbeat is unchanged for 5.998243406s, waiting for 1m20s Jun 3 23:24:16.134: INFO: node status heartbeat is unchanged for 6.997520893s, waiting for 1m20s Jun 3 23:24:17.135: INFO: node status heartbeat is unchanged for 7.998729949s, waiting for 1m20s Jun 3 23:24:18.133: INFO: node status heartbeat is unchanged for 8.996781193s, waiting for 1m20s Jun 3 23:24:18.137: INFO: node status heartbeat is unchanged for 8.999994178s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:24:18.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-4862" for this suite. • [SLOW TEST:300.055 seconds] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":3,"skipped":158,"failed":0} Jun 3 23:24:18.156: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:54.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-3810c600-75f1-404e-acc2-d11baa1c9d3c in namespace container-probe-135 Jun 3 23:21:04.736: INFO: Started pod startup-3810c600-75f1-404e-acc2-d11baa1c9d3c in namespace container-probe-135 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 23:21:04.740: INFO: Initial restart count of pod startup-3810c600-75f1-404e-acc2-d11baa1c9d3c is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:25:05.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-135" for this suite. • [SLOW TEST:250.757 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:20:56.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-539ea4fb-aa10-443b-8f13-a9402e957e58 in namespace container-probe-8665 Jun 3 23:21:08.466: INFO: Started pod liveness-539ea4fb-aa10-443b-8f13-a9402e957e58 in namespace container-probe-8665 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 23:21:08.469: INFO: Initial restart count of pod liveness-539ea4fb-aa10-443b-8f13-a9402e957e58 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:25:09.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8665" for this suite. 
• [SLOW TEST:253.038 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":4,"skipped":221,"failed":0} Jun 3 23:25:09.468: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":3,"skipped":190,"failed":0} [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:50.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 Jun 3 23:19:50.308: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:19:52.311: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:19:54.312: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:19:56.312: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:19:58.312: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:00.313: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:02.311: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:04.312: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Jun 3 23:21:48.510: INFO: getRestartDelay: restartCount = 4, finishedAt=2022-06-03 23:20:58 +0000 UTC restartedAt=2022-06-03 23:21:46 +0000 UTC (48s) STEP: getting restart delay-1 Jun 3 23:23:21.968: INFO: getRestartDelay: restartCount = 5, finishedAt=2022-06-03 23:21:51 +0000 UTC restartedAt=2022-06-03 23:23:20 +0000 UTC (1m29s) STEP: getting restart delay-2 Jun 3 23:26:07.735: INFO: getRestartDelay: restartCount = 6, finishedAt=2022-06-03 23:23:25 +0000 UTC restartedAt=2022-06-03 23:26:06 +0000 UTC (2m41s) STEP: updating the image Jun 3 23:26:08.244: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Jun 3 23:26:30.302: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-06-03 23:26:17 +0000 UTC restartedAt=2022-06-03 23:26:29 +0000 UTC (12s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:26:30.302: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2244" for this suite. • [SLOW TEST:400.041 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":4,"skipped":190,"failed":0} Jun 3 23:26:30.313: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:19:57.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722 Jun 3 23:19:57.918: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:19:59.923: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:01.923: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:03.922: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:05.922: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jun 3 23:20:07.922: INFO: The status of Pod back-off-cap is Running (Ready = true) STEP: getting restart delay when capped Jun 3 23:31:26.300: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-06-03 23:26:13 +0000 UTC restartedAt=2022-06-03 23:31:25 +0000 UTC (5m12s) Jun 3 23:36:41.719: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-06-03 23:31:30 +0000 UTC restartedAt=2022-06-03 23:36:40 +0000 UTC (5m10s) Jun 3 23:41:52.094: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-06-03 23:36:45 +0000 UTC restartedAt=2022-06-03 23:41:50 +0000 UTC (5m5s) STEP: getting restart delay after a capped delay Jun 3 23:47:00.464: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-06-03 23:41:55 +0000 UTC restartedAt=2022-06-03 23:46:59 +0000 UTC (5m4s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:47:00.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8652" for this suite. 
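Both Pods specs above exercise the kubelet's crash-loop back-off: the restart delay roughly doubles per restart until it reaches the MaxContainerBackOff cap, and updating the pod's image resets the timer, which is why the delay dropped to 12s right after the image update while the capped spec keeps seeing ~5m delays. The sketch below computes the nominal schedule, assuming the kubelet defaults of a 10s initial delay and a 5m cap (those constants are not shown in this log).

```go
// Nominal kubelet crash-loop back-off: the delay doubles per restart and is
// capped at MaxContainerBackOff. 10s initial / 5m cap are assumed defaults.
// The 48s, 1m29s and 2m41s delays logged above sit a few seconds over the
// nominal 40s, 1m20s and 2m40s steps, and the capped spec's ~5m delays sit
// at the cap; updating the image starts the sequence over (hence the 12s).
package main

import (
	"fmt"
	"time"
)

func nominalBackOff(initial, max time.Duration, restarts int) []time.Duration {
	delays := make([]time.Duration, 0, restarts)
	d := initial
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return delays
}

func main() {
	fmt.Println(nominalBackOff(10*time.Second, 5*time.Minute, 7))
	// Output: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}
```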
• [SLOW TEST:1622.593 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":6,"skipped":409,"failed":0}
Jun 3 23:47:00.476: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":7,"skipped":1062,"failed":0}
Jun 3 23:25:05.455: INFO: Running AfterSuite actions on all nodes
Jun 3 23:47:00.518: INFO: Running AfterSuite actions on node 1
Jun 3 23:47:00.518: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5773 Specs in 1673.282 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5720 Skipped

Ginkgo ran 1 suite in 27m54.913078058s
Test Suite Failed
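For reference against the v1.NodeStatus diffs and the NodeLease spec earlier in this fragment: with the NodeLease feature the kubelet renews a Lease object frequently while reporting full node status (including the per-condition LastHeartbeatTime values diffed above) less often. Below is a minimal client-go sketch that reads both for one node; the in-cluster config and the node name are assumptions, and this is not the suite's own helper.

```go
// Reads the node's Lease (renewed frequently by the kubelet) and the node's
// condition heartbeats (updated less often), roughly what the NodeLease spec
// above observes. In-cluster config and the node name are assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the program runs inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodeName := "node2" // placeholder; the log above happens to watch node2

	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("lease last renewed:", lease.Spec.RenewTime)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s heartbeat: %s\n", c.Type, c.LastHeartbeatTime)
	}
}
```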