Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636775821 - Will randomize all specs
Will run 5770 specs
Running in parallel across 10 nodes
Nov 13 03:57:03.033: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:57:03.038: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 13 03:57:03.068: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 13 03:57:03.129: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting
Nov 13 03:57:03.130: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting
Nov 13 03:57:03.130: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 13 03:57:03.130: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 13 03:57:03.130: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 13 03:57:03.144: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 13 03:57:03.144: INFO: e2e test version: v1.21.5
Nov 13 03:57:03.146: INFO: kube-apiserver version: v1.21.1
Nov 13 03:57:03.146: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:57:03.153: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
Nov 13 03:57:03.158: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:57:03.180: INFO: Cluster IP family: ipv4
Nov 13 03:57:03.157: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:57:03.180: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 13 03:57:03.158: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:57:03.182: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
Nov 13 03:57:03.168: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:57:03.189: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSS
------------------------------
Nov 13 03:57:03.181: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:57:03.201: INFO: Cluster IP family: ipv4
SSS
------------------------------
Nov 13 03:57:03.179: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:57:03.202: INFO: Cluster IP family: ipv4
Nov 13 03:57:03.178: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:57:03.202: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Nov 13 03:57:03.187: INFO:
>>> kubeConfig: /root/.kube/config Nov 13 03:57:03.207: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSS ------------------------------ Nov 13 03:57:03.192: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:57:03.215: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass W1113 03:57:03.702492 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:03.702: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:03.704: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:03.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-724" for this suite. 
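The rejection exercised above comes from pairing a RuntimeClass whose scheduling.nodeSelector contradicts the pod's own nodeSelector: the RuntimeClass admission plugin merges the two selectors, detects the conflict, and refuses the create request. A minimal sketch of the two objects (not the test's actual code), assuming the k8s.io/api and k8s.io/apimachinery modules are on the module path; the handler name, labels, and image are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RuntimeClass that only allows scheduling onto nodes labeled foo=bar.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "conflict-runtimeclass"},
		Handler:    "runc", // illustrative handler name
		Scheduling: &nodev1.Scheduling{NodeSelector: map[string]string{"foo": "bar"}},
	}

	// Pod that references the RuntimeClass but pins itself to foo=conflict.
	// Merging the selectors produces two values for the same key, so the
	// apiserver rejects the pod, which is what the spec above asserts.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "conflict-pod"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rc.Name,
			NodeSelector:     map[string]string{"foo": "conflict"},
			RestartPolicy:    corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.5", // illustrative image
			}},
		},
	}

	for _, obj := range []interface{}{rc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}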
•S ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":1,"skipped":202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test W1113 03:57:03.230800 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:03.231: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:03.232: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Nov 13 03:57:03.246: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-9428" to be "Succeeded or Failed" Nov 13 03:57:03.249: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131462ms Nov 13 03:57:05.253: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00613153s Nov 13 03:57:07.257: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010870103s Nov 13 03:57:09.261: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014233189s Nov 13 03:57:11.265: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018165179s Nov 13 03:57:13.269: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022145553s Nov 13 03:57:15.273: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026438809s Nov 13 03:57:17.276: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 14.029263726s Nov 13 03:57:19.280: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.033351804s Nov 13 03:57:19.280: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:19.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9428" for this suite. 
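The "implicit-nonroot-uid" pod above sets runAsNonRoot without an explicit runAsUser, so the kubelet accepts it only because the image itself declares a non-root USER. A rough sketch of that pod shape, assuming the k8s.io/api module; the image tag and command are illustrative rather than the exact e2e fixture:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// runAsNonRoot=true with no runAsUser: the UID comes implicitly from the
	// image's USER directive, which must be numeric and non-zero.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "implicit-nonroot-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "implicit-nonroot-uid",
				Image:   "k8s.gcr.io/e2e-test-images/nonroot:1.1", // illustrative tag
				Command: []string{"id", "-u"},                     // prints the effective UID
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: boolPtr(true),
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}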
• [SLOW TEST:16.085 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":0,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:19.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Nov 13 03:57:19.548: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:19.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-2091" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod W1113 03:57:03.284024 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:03.284: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:03.285: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Nov 13 03:57:03.301: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:05.305: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:07.306: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:09.305: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:11.305: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:13.304: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:15.306: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:17.306: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:19.307: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:21.305: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Nov 13 03:57:21.308: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1831 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:57:21.308: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:57:21.529: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-1831 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:57:21.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Nov 13 03:57:21.896: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1831 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:57:21.896: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:21.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-1831" for this suite. 
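The privileged-pod run above uses two containers that differ only in securityContext.privileged; the test then execs `ip link add dummy1 type dummy` in each and expects the command to succeed only where privileged is true. A sketch of that pod shape, assuming the k8s.io/api module; container names follow the log, the image and sleep command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Two long-running containers in one pod; only the first is privileged,
	// so only it may manipulate network interfaces via `ip link`.
	container := func(name string, privileged bool) corev1.Container {
		return corev1.Container{
			Name:    name,
			Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative image
			Command: []string{"sleep", "3600"},
			SecurityContext: &corev1.SecurityContext{
				Privileged: boolPtr(privileged),
			},
		}
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				container("privileged-container", true),
				container("not-privileged-container", false),
			},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}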
• [SLOW TEST:18.738 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":21,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api W1113 03:57:03.560399 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:03.560: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:03.562: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Nov 13 03:57:03.575: INFO: Waiting up to 5m0s for pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f" in namespace "downward-api-3042" to be "Succeeded or Failed" Nov 13 03:57:03.578: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369296ms Nov 13 03:57:05.581: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006283904s Nov 13 03:57:07.587: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011771716s Nov 13 03:57:09.593: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017527763s Nov 13 03:57:11.595: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020240129s Nov 13 03:57:13.600: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024691328s Nov 13 03:57:15.608: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.033093716s Nov 13 03:57:17.613: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.038085004s Nov 13 03:57:19.618: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.042512917s Nov 13 03:57:21.621: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.046208397s Nov 13 03:57:23.625: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.049570917s STEP: Saw pod success Nov 13 03:57:23.625: INFO: Pod "downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f" satisfied condition "Succeeded or Failed" Nov 13 03:57:23.628: INFO: Trying to get logs from node node1 pod downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f container dapi-container: STEP: delete the pod Nov 13 03:57:23.642: INFO: Waiting for pod downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f to disappear Nov 13 03:57:23.644: INFO: Pod downward-api-7b9a2fec-2fcc-418c-84ab-f1d0162b449f no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:23.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3042" for this suite. • [SLOW TEST:20.116 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":106,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Nov 13 03:57:03.903: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e" in namespace "security-context-test-730" to be "Succeeded or Failed" Nov 13 03:57:03.906: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946946ms Nov 13 03:57:05.911: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007208369s Nov 13 03:57:07.918: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014319197s Nov 13 03:57:09.923: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020080681s Nov 13 03:57:11.927: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024067499s Nov 13 03:57:13.932: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028810596s Nov 13 03:57:15.940: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.036978918s Nov 13 03:57:17.948: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.044845635s Nov 13 03:57:19.955: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.051272234s Nov 13 03:57:21.958: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.054656613s Nov 13 03:57:23.961: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.058039342s Nov 13 03:57:23.961: INFO: Pod "alpine-nnp-nil-221581f0-87f4-46c5-8f70-af92ab00e44e" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:23.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-730" for this suite. • [SLOW TEST:20.104 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":277,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:04.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime W1113 03:57:04.229238 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:04.229: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:04.231: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 13 03:57:24.316: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:24.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-runtime-8064" for this suite. • [SLOW TEST:20.125 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":1,"skipped":470,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:24.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:32.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7995" for this suite. 
• [SLOW TEST:8.104 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":3,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:20.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-6ce5963c-4ed4-4419-8eeb-eb9f3c0c2882 in namespace container-probe-8439 Nov 13 03:57:30.308: INFO: Started pod startup-override-6ce5963c-4ed4-4419-8eeb-eb9f3c0c2882 in namespace container-probe-8439 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 03:57:30.310: INFO: Initial restart count of pod startup-override-6ce5963c-4ed4-4419-8eeb-eb9f3c0c2882 is 0 Nov 13 03:57:34.319: INFO: Restart count of pod container-probe-8439/startup-override-6ce5963c-4ed4-4419-8eeb-eb9f3c0c2882 is now 1 (4.008383777s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:34.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8439" for this suite. 
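The startup-override pod above demonstrates probe-level terminationGracePeriodSeconds: the pod asks for a long grace period overall, but the failing startupProbe carries its own short override, so the kubelet kills and restarts the container within a few seconds (restartCount went from 0 to 1 in roughly 4s). A rough sketch with illustrative values, assuming k8s.io/api v0.22 or newer (on v0.21, the cluster version in this log, the embedded probe field is named Handler rather than ProbeHandler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// The pod-level grace period is deliberately long; once the startup probe
	// fails, the probe-level override wins and the container is killed fast.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-override"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: int64Ptr(600),
			RestartPolicy:                 corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative image
				Command: []string{"sleep", "1200"},
				StartupProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}, // always fails
					},
					InitialDelaySeconds:           10,
					FailureThreshold:              1,
					TerminationGracePeriodSeconds: int64Ptr(5), // probe-level override
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}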
• [SLOW TEST:14.068 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":2,"skipped":507,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:34.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:34.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-3989" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":3,"skipped":696,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:23.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Nov 13 03:57:23.874: INFO: Waiting up to 5m0s for pod "security-context-efa33165-22b3-4de3-a449-b907919c5ec3" in namespace "security-context-9586" to be "Succeeded or Failed" Nov 13 03:57:23.876: INFO: Pod "security-context-efa33165-22b3-4de3-a449-b907919c5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023707ms Nov 13 03:57:25.878: INFO: Pod "security-context-efa33165-22b3-4de3-a449-b907919c5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00455119s Nov 13 03:57:27.884: INFO: Pod "security-context-efa33165-22b3-4de3-a449-b907919c5ec3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.009637383s Nov 13 03:57:29.886: INFO: Pod "security-context-efa33165-22b3-4de3-a449-b907919c5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01207945s Nov 13 03:57:31.890: INFO: Pod "security-context-efa33165-22b3-4de3-a449-b907919c5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016175263s Nov 13 03:57:33.893: INFO: Pod "security-context-efa33165-22b3-4de3-a449-b907919c5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018603216s Nov 13 03:57:35.897: INFO: Pod "security-context-efa33165-22b3-4de3-a449-b907919c5ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.022734517s STEP: Saw pod success Nov 13 03:57:35.897: INFO: Pod "security-context-efa33165-22b3-4de3-a449-b907919c5ec3" satisfied condition "Succeeded or Failed" Nov 13 03:57:35.900: INFO: Trying to get logs from node node1 pod security-context-efa33165-22b3-4de3-a449-b907919c5ec3 container test-container: STEP: delete the pod Nov 13 03:57:35.912: INFO: Waiting for pod security-context-efa33165-22b3-4de3-a449-b907919c5ec3 to disappear Nov 13 03:57:35.914: INFO: Pod security-context-efa33165-22b3-4de3-a449-b907919c5ec3 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:35.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9586" for this suite. • [SLOW TEST:12.081 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":2,"skipped":198,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:24.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Nov 13 03:57:35.454: INFO: start=2021-11-13 03:57:30.414691329 +0000 UTC m=+28.999576336, now=2021-11-13 03:57:35.454547047 +0000 UTC m=+34.039432037, kubelet pod: {"metadata":{"name":"pod-submit-remove-15f0a472-18f6-472a-87f1-a17faed5fef5","namespace":"pods-8989","uid":"f0ea24bc-c804-44c4-96d2-5ab7e32bb1d8","resourceVersion":"151625","creationTimestamp":"2021-11-13T03:57:24Z","deletionTimestamp":"2021-11-13T03:58:00Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"385435922"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n 
\"10.244.4.225\"\n ],\n \"mac\": \"8e:83:73:9d:6b:72\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.225\"\n ],\n \"mac\": \"8e:83:73:9d:6b:72\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-11-13T03:57:24.395437731Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-11-13T03:57:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-twjrx","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-twjrx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T03:57:24Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T03:57:29Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T03:57:29Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T03:57:24Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.225","podIPs":[{"ip":"10.244.4.225"}],"startTime":"2021-11-13T03:57:24Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2021-11-13T03:57:28Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://822e249d96e2e97774714bfccdb6efb160a400c05ce8a83f274017382a4bee6d","started":true}],"qosClass":"BestEffort"}} Nov 13 03:57:40.434: INFO: start=2021-11-13 03:57:30.414691329 +0000 UTC m=+28.999576336, now=2021-11-13 03:57:40.434698784 +0000 UTC m=+39.019583908, kubelet pod: 
{"metadata":{"name":"pod-submit-remove-15f0a472-18f6-472a-87f1-a17faed5fef5","namespace":"pods-8989","uid":"f0ea24bc-c804-44c4-96d2-5ab7e32bb1d8","resourceVersion":"151625","creationTimestamp":"2021-11-13T03:57:24Z","deletionTimestamp":"2021-11-13T03:58:00Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"385435922"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.225\"\n ],\n \"mac\": \"8e:83:73:9d:6b:72\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.225\"\n ],\n \"mac\": \"8e:83:73:9d:6b:72\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-11-13T03:57:24.395437731Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-11-13T03:57:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-twjrx","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-twjrx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T03:57:24Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-13T03:57:35Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-13T03:57:35Z","reason":"ContainersNotReady","message":"containers with unready status: 
[agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T03:57:24Z"}],"hostIP":"10.10.190.208","startTime":"2021-11-13T03:57:24Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"","started":false}],"qosClass":"BestEffort"}} Nov 13 03:57:45.436: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:45.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8989" for this suite. • [SLOW TEST:21.083 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":2,"skipped":484,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet W1113 03:57:03.549912 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:03.550: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:03.551: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-aaf61f16-2b2b-4665-8882-6d7c2967b7db in namespace kubelet-8733 I1113 03:57:03.590512 28 runners.go:190] Created replication controller with name: cleanup20-aaf61f16-2b2b-4665-8882-6d7c2967b7db, namespace: kubelet-8733, replica count: 20 I1113 03:57:13.641840 28 runners.go:190] cleanup20-aaf61f16-2b2b-4665-8882-6d7c2967b7db Pods: 20 out of 20 created, 0 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:57:23.642784 28 runners.go:190] cleanup20-aaf61f16-2b2b-4665-8882-6d7c2967b7db Pods: 20 out of 20 created, 17 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:57:33.643785 28 runners.go:190] cleanup20-aaf61f16-2b2b-4665-8882-6d7c2967b7db Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 13 03:57:34.644: INFO: Checking pods on node node2 via /runningpods endpoint Nov 13 03:57:34.644: INFO: Checking pods on node node1 via /runningpods endpoint Nov 13 03:57:34.675: INFO: Resource usage on node "master2": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 0.324 3744.90 1543.40 "runtime" 0.113 616.13 250.64 "kubelet" 0.113 616.13 250.64 Resource usage on node "master3": container cpu(cores) memory_working_set(MB) memory_rss(MB) "runtime" 0.111 537.04 243.85 "kubelet" 0.111 537.04 243.85 "/" 0.599 4078.56 1715.84 Resource usage on node "node1": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 1.884 6692.10 2496.56 "runtime" 1.081 2612.36 548.39 "kubelet" 1.081 2612.36 548.39 Resource usage on node "node2": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 1.642 4219.48 1193.45 "runtime" 1.315 1623.31 566.25 "kubelet" 1.315 1623.31 566.25 Resource usage on node "master1": container cpu(cores) memory_working_set(MB) memory_rss(MB) "/" 0.490 5040.73 1704.15 "runtime" 0.121 673.38 290.41 "kubelet" 0.121 673.38 290.41 STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-aaf61f16-2b2b-4665-8882-6d7c2967b7db in namespace kubelet-8733, will wait for the garbage collector to delete the pods Nov 13 03:57:34.732: INFO: Deleting ReplicationController cleanup20-aaf61f16-2b2b-4665-8882-6d7c2967b7db took: 4.085474ms Nov 13 03:57:35.333: INFO: Terminating ReplicationController cleanup20-aaf61f16-2b2b-4665-8882-6d7c2967b7db pods took: 600.419769ms Nov 13 03:57:52.734: INFO: Checking pods on node node2 via /runningpods endpoint Nov 13 03:57:52.734: INFO: Checking pods on node node1 via /runningpods endpoint Nov 13 03:57:52.960: INFO: Deleting 20 pods on 2 nodes completed in 1.226122938s after the RC was deleted Nov 13 03:57:52.960: INFO: CPU usage of containers on node "master1" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.331 0.423 0.438 0.490 0.490 0.490 "runtime" 0.000 0.000 0.121 0.126 0.126 0.126 0.126 "kubelet" 0.000 0.000 0.121 0.126 0.126 0.126 0.126 CPU usage of containers on node "master2" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.281 0.283 0.292 0.324 0.324 0.324 "runtime" 0.000 0.000 0.104 0.104 0.113 0.113 0.113 "kubelet" 0.000 0.000 0.104 0.104 0.113 0.113 0.113 CPU usage of containers on node "master3" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 0.457 0.457 0.511 0.511 0.511 "runtime" 
0.000 0.000 0.082 0.095 0.095 0.095 0.095 "kubelet" 0.000 0.000 0.082 0.095 0.095 0.095 0.095 CPU usage of containers on node "node1" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 1.884 1.884 1.997 1.997 1.997 "runtime" 0.000 0.000 0.849 0.849 0.849 0.849 0.849 "kubelet" 0.000 0.000 0.849 0.849 0.849 0.849 0.849 CPU usage of containers on node "node2" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 1.396 1.396 1.642 1.642 1.642 "runtime" 0.000 0.000 0.582 0.582 0.741 0.741 0.741 "kubelet" 0.000 0.000 0.582 0.582 0.741 0.741 0.741 [AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node node1 STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node node2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:52.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-8733" for this suite. • [SLOW TEST:49.465 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:32.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Nov 13 03:57:32.232: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Nov 13 03:57:32.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1372 create -f -' Nov 13 03:57:32.739: INFO: stderr: "" Nov 13 03:57:32.739: INFO: stdout: "secret/test-secret created\n" Nov 13 03:57:32.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1372 create -f -' Nov 13 03:57:33.077: INFO: stderr: "" Nov 13 03:57:33.077: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Nov 13 03:57:53.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1372 logs secret-test-pod test-container' Nov 13 03:57:53.259: INFO: stderr: "" Nov 13 03:57:53.259: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:53.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-1372" for this suite. • [SLOW TEST:21.066 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":4,"skipped":339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:35.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Nov 13 03:57:35.965: INFO: Waiting up to 5m0s for pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44" in namespace "pods-9930" to be "Succeeded or Failed" Nov 13 03:57:35.967: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44": Phase="Pending", Reason="", readiness=false. Elapsed: 1.882107ms Nov 13 03:57:37.972: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006616788s Nov 13 03:57:39.981: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.015913353s Nov 13 03:57:41.984: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018633768s Nov 13 03:57:43.986: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02124234s Nov 13 03:57:45.990: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025507187s Nov 13 03:57:47.996: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030681728s Nov 13 03:57:50.003: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44": Phase="Pending", Reason="", readiness=false. Elapsed: 14.038048712s Nov 13 03:57:52.007: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.041725316s STEP: Saw pod success Nov 13 03:57:52.007: INFO: Pod "pod-always-succeede0b17bb5-1007-4efe-ac3f-4027974f0b44" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:57:54.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9930" for this suite. • [SLOW TEST:18.094 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":3,"skipped":203,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:53.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Nov 13 03:57:53.386: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-03115e8f-c0ba-41b2-a255-1f961f761d14" in namespace "security-context-test-4556" to be "Succeeded or Failed" Nov 13 03:57:53.388: INFO: Pod "busybox-readonly-true-03115e8f-c0ba-41b2-a255-1f961f761d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102474ms Nov 13 03:57:55.391: INFO: Pod "busybox-readonly-true-03115e8f-c0ba-41b2-a255-1f961f761d14": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.004848997s Nov 13 03:57:57.393: INFO: Pod "busybox-readonly-true-03115e8f-c0ba-41b2-a255-1f961f761d14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007332054s Nov 13 03:57:59.398: INFO: Pod "busybox-readonly-true-03115e8f-c0ba-41b2-a255-1f961f761d14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012401234s Nov 13 03:58:01.403: INFO: Pod "busybox-readonly-true-03115e8f-c0ba-41b2-a255-1f961f761d14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017208722s Nov 13 03:58:03.407: INFO: Pod "busybox-readonly-true-03115e8f-c0ba-41b2-a255-1f961f761d14": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020723193s Nov 13 03:58:05.411: INFO: Pod "busybox-readonly-true-03115e8f-c0ba-41b2-a255-1f961f761d14": Phase="Failed", Reason="", readiness=false. Elapsed: 12.024792758s Nov 13 03:58:05.411: INFO: Pod "busybox-readonly-true-03115e8f-c0ba-41b2-a255-1f961f761d14" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:05.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4556" for this suite. • [SLOW TEST:12.071 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:53.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Nov 13 03:57:53.488: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-f5d1609b-7eaa-4fd0-8249-236a608b86d3" in namespace "security-context-test-2259" to be "Succeeded or Failed" Nov 13 03:57:53.491: INFO: Pod "alpine-nnp-true-f5d1609b-7eaa-4fd0-8249-236a608b86d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.934769ms Nov 13 03:57:55.495: INFO: Pod "alpine-nnp-true-f5d1609b-7eaa-4fd0-8249-236a608b86d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006886441s Nov 13 03:57:57.498: INFO: Pod "alpine-nnp-true-f5d1609b-7eaa-4fd0-8249-236a608b86d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010655807s Nov 13 03:57:59.503: INFO: Pod "alpine-nnp-true-f5d1609b-7eaa-4fd0-8249-236a608b86d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015636949s Nov 13 03:58:01.507: INFO: Pod "alpine-nnp-true-f5d1609b-7eaa-4fd0-8249-236a608b86d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019530587s Nov 13 03:58:03.511: INFO: Pod "alpine-nnp-true-f5d1609b-7eaa-4fd0-8249-236a608b86d3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023671613s Nov 13 03:58:05.516: INFO: Pod "alpine-nnp-true-f5d1609b-7eaa-4fd0-8249-236a608b86d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.027984342s Nov 13 03:58:05.516: INFO: Pod "alpine-nnp-true-f5d1609b-7eaa-4fd0-8249-236a608b86d3" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:05.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2259" for this suite. • [SLOW TEST:12.072 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:54.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. 
STEP: verifying the node has the label foo-98ff3086-abfc-45e1-aed4-6bc81be34cd6 bar STEP: verifying the node has the label fizz-cf588587-ed72-4994-a5c8-ae60a9e57f5a buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-cf588587-ed72-4994-a5c8-ae60a9e57f5a off the node node2 STEP: verifying the node doesn't have the label fizz-cf588587-ed72-4994-a5c8-ae60a9e57f5a STEP: removing the label foo-98ff3086-abfc-45e1-aed4-6bc81be34cd6 off the node node2 STEP: verifying the node doesn't have the label foo-98ff3086-abfc-45e1-aed4-6bc81be34cd6 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:14.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-4126" for this suite. • [SLOW TEST:20.130 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":4,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:22.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-59153622-1caf-4257-8c2d-8178a6c07cca in namespace container-probe-4192 Nov 13 03:57:34.170: INFO: Started pod startup-59153622-1caf-4257-8c2d-8178a6c07cca in namespace container-probe-4192 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 03:57:34.173: INFO: Initial restart count of pod startup-59153622-1caf-4257-8c2d-8178a6c07cca is 0 Nov 13 03:58:24.291: INFO: Restart count of pod container-probe-4192/startup-59153622-1caf-4257-8c2d-8178a6c07cca is now 1 (50.117911361s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:24.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4192" for this suite. 
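The probing-container specs in this stretch of the run exercise startup and liveness probes: the kubelet runs only the startup probe until it succeeds, and only after that does the liveness probe start driving restarts. A rough sketch of a container wired that way, using the v1.21-era client-go types this suite is built against — the image, commands, file paths, and thresholds are assumptions, not the settings the tests actually use:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "busybox",
		Image:   "busybox", // assumption; the suite uses its own busybox image
		Command: []string{"sh", "-c", "touch /tmp/started; sleep 600"},
		// Until the startup probe succeeds, liveness and readiness probes are held off.
		StartupProbe: &corev1.Probe{
			// In client-go >= v1.23 this embedded field is named ProbeHandler.
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}},
			},
			PeriodSeconds:    2,
			FailureThreshold: 30,
		},
		// Once startup has passed, a failing liveness probe restarts the container,
		// which is the restartCount transition (0 -> 1) logged for these pods.
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
			},
			PeriodSeconds:    5,
			FailureThreshold: 1,
		},
	}
	fmt.Printf("%+v\n", c)
}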
• [SLOW TEST:62.174 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":2,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1113 03:57:03.607487 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:03.607: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:03.609: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-8c59139d-0876-4f0d-bc7d-faf22952107b in namespace container-probe-788 Nov 13 03:57:19.629: INFO: Started pod startup-8c59139d-0876-4f0d-bc7d-faf22952107b in namespace container-probe-788 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 03:57:19.631: INFO: Initial restart count of pod startup-8c59139d-0876-4f0d-bc7d-faf22952107b is 0 Nov 13 03:58:25.765: INFO: Restart count of pod container-probe-788/startup-8c59139d-0876-4f0d-bc7d-faf22952107b is now 1 (1m6.134314899s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:25.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-788" for this suite. 
• [SLOW TEST:82.196 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:24.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:32.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6297" for this suite. • [SLOW TEST:8.095 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":3,"skipped":269,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:32.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-4778/configmap-test-8f2a0d9c-fa82-48bd-a8e0-5ff488b8f015 STEP: Updating configMap configmap-4778/configmap-test-8f2a0d9c-fa82-48bd-a8e0-5ff488b8f015 STEP: Verifying update of ConfigMap configmap-4778/configmap-test-8f2a0d9c-fa82-48bd-a8e0-5ff488b8f015 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 
03:58:32.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4778" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":4,"skipped":276,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:25.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Nov 13 03:58:26.016: INFO: Waiting up to 5m0s for pod "security-context-19288541-00a8-4120-9a18-c29f6dafb860" in namespace "security-context-9078" to be "Succeeded or Failed" Nov 13 03:58:26.019: INFO: Pod "security-context-19288541-00a8-4120-9a18-c29f6dafb860": Phase="Pending", Reason="", readiness=false. Elapsed: 3.098939ms Nov 13 03:58:28.022: INFO: Pod "security-context-19288541-00a8-4120-9a18-c29f6dafb860": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005808312s Nov 13 03:58:30.027: INFO: Pod "security-context-19288541-00a8-4120-9a18-c29f6dafb860": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011145426s Nov 13 03:58:32.031: INFO: Pod "security-context-19288541-00a8-4120-9a18-c29f6dafb860": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014599358s Nov 13 03:58:34.034: INFO: Pod "security-context-19288541-00a8-4120-9a18-c29f6dafb860": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017577754s STEP: Saw pod success Nov 13 03:58:34.034: INFO: Pod "security-context-19288541-00a8-4120-9a18-c29f6dafb860" satisfied condition "Succeeded or Failed" Nov 13 03:58:34.037: INFO: Trying to get logs from node node2 pod security-context-19288541-00a8-4120-9a18-c29f6dafb860 container test-container: STEP: delete the pod Nov 13 03:58:34.049: INFO: Waiting for pod security-context-19288541-00a8-4120-9a18-c29f6dafb860 to disappear Nov 13 03:58:34.051: INFO: Pod security-context-19288541-00a8-4120-9a18-c29f6dafb860 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:34.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9078" for this suite. 
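The security-context spec just logged (namespace security-context-9078, "seccomp default which is unconfined") creates a pod and inspects the seccomp mode it ends up with; the log only names the legacy annotation key being exercised, not how it is set. A hedged sketch of a pod that reports its own seccomp status — the image and probe command are assumptions, and on current clusters securityContext.seccompProfile replaces the alpha annotation:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The log names the legacy key seccomp.security.alpha.kubernetes.io/pod; this
	// sketch sets no profile at all, since the spec's name suggests it observes the
	// node's default. Requesting a profile explicitly would look like:
	//   Annotations: map[string]string{"seccomp.security.alpha.kubernetes.io/pod": "runtime/default"}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "seccomp-default-check"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumption
				// "Seccomp: 0" in /proc means no seccomp filter is applied, i.e. unconfined.
				Command: []string{"sh", "-c", "grep Seccomp /proc/1/status"},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}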
• [SLOW TEST:8.081 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":2,"skipped":230,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:32.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 E1113 03:58:34.993326 32 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 242 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x653b640, 0x9beb6a0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86 panic(0x653b640, 0x9beb6a0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc0015b8f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003a98500, 0xc0015b8f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0004dde78, 0xc003a98500, 0xc001fbb9e0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0004dde78, 0xc003a98500, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0004dde78, 0xc003a98500, 0xc0004dde78, 0xc003a98500) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003a98500, 0x14, 0xc004aa0ba0) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc001fc3ce0, 0xc0027a19e0, 0x14, 0xc004aa0ba0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000fe2a20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000fe2a20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc000ffa2c0, 0x768f9a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0029c7e00, 0x0, 0x768f9a0, 0xc000238800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0029c7e00, 0x768f9a0, 0xc000238800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00047c500, 0xc0029c7e00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00047c500, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00047c500, 0xc004b8eb58) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000174280, 0x7f3dfe3285d0, 0xc001e01b00, 0x6f05d9d, 0x14, 0xc0039fb8f0, 0x3, 0x3, 0x7745ab8, 0xc000238800, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x7694a60, 0xc001e01b00, 0x6f05d9d, 0x14, 0xc003f0a300, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x7694a60, 0xc001e01b00, 0x6f05d9d, 0x14, 0xc003d54380, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e01b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001e01b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001e01b00, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-probe-5182". STEP: Found 4 events. Nov 13 03:58:34.996: INFO: At 2021-11-13 03:58:32 +0000 UTC - event for startup-d02f7247-cc40-49a0-aee3-8a36eedb0ca7: {default-scheduler } Scheduled: Successfully assigned container-probe-5182/startup-d02f7247-cc40-49a0-aee3-8a36eedb0ca7 to node2 Nov 13 03:58:34.996: INFO: At 2021-11-13 03:58:34 +0000 UTC - event for startup-d02f7247-cc40-49a0-aee3-8a36eedb0ca7: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" Nov 13 03:58:34.996: INFO: At 2021-11-13 03:58:34 +0000 UTC - event for startup-d02f7247-cc40-49a0-aee3-8a36eedb0ca7: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 298.465542ms Nov 13 03:58:34.996: INFO: At 2021-11-13 03:58:34 +0000 UTC - event for startup-d02f7247-cc40-49a0-aee3-8a36eedb0ca7: {kubelet node2} Created: Created container busybox Nov 13 03:58:34.998: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 03:58:34.999: INFO: startup-d02f7247-cc40-49a0-aee3-8a36eedb0ca7 node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:58:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:58:32 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:58:32 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:58:32 +0000 UTC }] Nov 13 03:58:34.999: INFO: Nov 13 03:58:35.005: INFO: Logging node info for node master1 Nov 13 03:58:35.009: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 152936 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock 
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:34 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:34 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:34 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:58:34 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 03:58:35.009: INFO: Logging kubelet events for node master1 Nov 13 03:58:35.012: INFO: Logging pods the kubelet thinks is on node master1 Nov 13 03:58:35.034: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.034: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 03:58:35.034: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.034: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 03:58:35.034: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded) Nov 13 03:58:35.034: INFO: Container docker-registry ready: true, restart count 0 Nov 13 03:58:35.034: INFO: Container nginx ready: true, restart count 0 Nov 13 03:58:35.034: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.034: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 03:58:35.034: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 03:58:35.034: INFO: Init container install-cni ready: true, restart count 0 Nov 13 03:58:35.034: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 03:58:35.034: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.034: INFO: Container kube-multus ready: true, restart count 1 Nov 13 03:58:35.034: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.034: INFO: Container coredns ready: true, restart count 2 Nov 13 03:58:35.034: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 03:58:35.034: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 03:58:35.034: INFO: Container node-exporter ready: true, restart count 0 Nov 13 03:58:35.034: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.034: INFO: Container kube-scheduler ready: true, restart count 0 W1113 03:58:35.047233 32 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 03:58:35.120: INFO: Latency metrics for node master1 Nov 13 03:58:35.120: INFO: Logging node info for node master2 Nov 13 03:58:35.122: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 152875 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 
405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:31 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:31 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:31 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:58:31 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 03:58:35.123: INFO: Logging kubelet events for node master2 Nov 13 03:58:35.125: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 03:58:35.134: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.134: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 03:58:35.134: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.134: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 03:58:35.134: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 03:58:35.134: INFO: Init container install-cni ready: true, restart count 0 Nov 13 03:58:35.134: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 03:58:35.134: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.134: INFO: Container kube-multus ready: true, restart count 1 Nov 13 03:58:35.134: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.134: INFO: Container coredns ready: true, restart count 1 Nov 13 03:58:35.134: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 03:58:35.134: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 03:58:35.134: INFO: 
Container node-exporter ready: true, restart count 0 Nov 13 03:58:35.134: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.134: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 03:58:35.134: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.134: INFO: Container nfd-controller ready: true, restart count 0 Nov 13 03:58:35.134: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.134: INFO: Container kube-apiserver ready: true, restart count 0 W1113 03:58:35.148262 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 03:58:35.220: INFO: Latency metrics for node master2 Nov 13 03:58:35.220: INFO: Logging node info for node master3 Nov 13 03:58:35.222: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 152879 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:32 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:32 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:32 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:58:32 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 03:58:35.222: INFO: Logging kubelet events for node master3 Nov 13 03:58:35.224: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 03:58:35.234: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.234: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 03:58:35.234: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 03:58:35.234: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 03:58:35.234: INFO: 
Container node-exporter ready: true, restart count 0 Nov 13 03:58:35.234: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.234: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 03:58:35.234: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.234: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 13 03:58:35.234: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.234: INFO: Container kube-multus ready: true, restart count 1 Nov 13 03:58:35.234: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.234: INFO: Container autoscaler ready: true, restart count 1 Nov 13 03:58:35.234: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.234: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 03:58:35.234: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 03:58:35.234: INFO: Init container install-cni ready: true, restart count 0 Nov 13 03:58:35.234: INFO: Container kube-flannel ready: true, restart count 1 W1113 03:58:35.248129 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 03:58:35.311: INFO: Latency metrics for node master3 Nov 13 03:58:35.311: INFO: Logging node info for node node1 Nov 13 03:58:35.315: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 152838 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 01:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-13 03:57:03 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:29 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:29 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:29 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:58:29 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 03:58:35.315: INFO: Logging kubelet events for node node1 Nov 13 03:58:35.317: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 03:58:35.333: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 03:58:35.333: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container kube-sriovdp ready: 
true, restart count 0 Nov 13 03:58:35.333: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded) Nov 13 03:58:35.333: INFO: Container discover ready: false, restart count 0 Nov 13 03:58:35.333: INFO: Container init ready: false, restart count 0 Nov 13 03:58:35.333: INFO: Container install ready: false, restart count 0 Nov 13 03:58:35.333: INFO: liveness-http started at 2021-11-13 03:57:46 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container liveness-http ready: true, restart count 1 Nov 13 03:58:35.333: INFO: pod-submit-status-0-11 started at 2021-11-13 03:58:31 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container busybox ready: false, restart count 0 Nov 13 03:58:35.333: INFO: dapi-test-pod started at 2021-11-13 03:58:34 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container test-container ready: false, restart count 0 Nov 13 03:58:35.333: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 03:58:35.333: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 03:58:35.333: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 03:58:35.333: INFO: Container config-reloader ready: true, restart count 0 Nov 13 03:58:35.333: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 03:58:35.333: INFO: Container grafana ready: true, restart count 0 Nov 13 03:58:35.333: INFO: Container prometheus ready: true, restart count 1 Nov 13 03:58:35.333: INFO: liveness-exec started at 2021-11-13 03:57:46 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container liveness-exec ready: true, restart count 0 Nov 13 03:58:35.333: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container kube-multus ready: true, restart count 1 Nov 13 03:58:35.333: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 03:58:35.333: INFO: Container nodereport ready: true, restart count 0 Nov 13 03:58:35.333: INFO: Container reconcile ready: true, restart count 0 Nov 13 03:58:35.333: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 03:58:35.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 03:58:35.333: INFO: Container node-exporter ready: true, restart count 0 Nov 13 03:58:35.333: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 03:58:35.333: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Init container install-cni ready: true, restart count 2 Nov 13 03:58:35.333: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 03:58:35.333: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded) Nov 13 03:58:35.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 03:58:35.333: INFO: Container prometheus-operator 
ready: true, restart count 0 Nov 13 03:58:35.333: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 03:58:35.333: INFO: Container collectd ready: true, restart count 0 Nov 13 03:58:35.333: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 03:58:35.333: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 03:58:35.333: INFO: pod-back-off-image started at 2021-11-13 03:57:03 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.333: INFO: Container back-off ready: false, restart count 3 W1113 03:58:35.351383 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 03:58:35.728: INFO: Latency metrics for node node1 Nov 13 03:58:35.728: INFO: Logging node info for node node2 Nov 13 03:58:35.730: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 152884 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock 
nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 01:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-13 03:58:06 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:32 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:32 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:58:32 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:58:32 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 03:58:35.730: INFO: Logging kubelet events for node node2 Nov 13 03:58:35.732: INFO: Logging pods the kubelet thinks is on node node2 Nov 13 03:58:35.746: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 03:58:35.747: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 03:58:35.747: INFO: Container collectd ready: true, restart count 0 Nov 13 03:58:35.747: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 03:58:35.747: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 03:58:35.747: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded) Nov 13 03:58:35.747: INFO: Container nodereport ready: true, restart count 0 Nov 13 03:58:35.747: INFO: Container reconcile ready: true, restart count 0 Nov 13 03:58:35.747: INFO: startup-76463057-0955-4681-bccf-1ac4895c21b8 started at 2021-11-13 03:57:03 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container busybox ready: false, restart count 0 Nov 13 
03:58:35.747: INFO: pod-submit-status-2-10 started at 2021-11-13 03:58:17 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container busybox ready: false, restart count 0 Nov 13 03:58:35.747: INFO: slave started at 2021-11-13 03:58:20 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container cntr ready: true, restart count 0 Nov 13 03:58:35.747: INFO: back-off-cap started at 2021-11-13 03:58:05 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container back-off-cap ready: false, restart count 1 Nov 13 03:58:35.747: INFO: pod-submit-status-1-7 started at 2021-11-13 03:58:07 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container busybox ready: false, restart count 0 Nov 13 03:58:35.747: INFO: master started at 2021-11-13 03:58:14 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container cntr ready: true, restart count 0 Nov 13 03:58:35.747: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded) Nov 13 03:58:35.747: INFO: Container discover ready: false, restart count 0 Nov 13 03:58:35.747: INFO: Container init ready: false, restart count 0 Nov 13 03:58:35.747: INFO: Container install ready: false, restart count 0 Nov 13 03:58:35.747: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 03:58:35.747: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 03:58:35.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 03:58:35.747: INFO: Container node-exporter ready: true, restart count 0 Nov 13 03:58:35.747: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container tas-extender ready: true, restart count 0 Nov 13 03:58:35.747: INFO: busybox-c42b5025-e01d-4836-89bd-db95bd59a25f started at 2021-11-13 03:58:05 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container busybox ready: true, restart count 0 Nov 13 03:58:35.747: INFO: private started at 2021-11-13 03:58:24 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container cntr ready: true, restart count 0 Nov 13 03:58:35.747: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 03:58:35.747: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 03:58:35.747: INFO: default started at 2021-11-13 03:58:30 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container cntr ready: true, restart count 0 Nov 13 03:58:35.747: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 03:58:35.747: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Init container install-cni ready: true, restart count 2 Nov 13 03:58:35.747: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 
03:58:35.747: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container kube-multus ready: true, restart count 1 Nov 13 03:58:35.747: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 03:58:35.747: INFO: startup-d02f7247-cc40-49a0-aee3-8a36eedb0ca7 started at 2021-11-13 03:58:32 +0000 UTC (0+1 container statuses recorded) Nov 13 03:58:35.747: INFO: Container busybox ready: false, restart count 0 W1113 03:58:35.768263 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 03:58:36.010: INFO: Latency metrics for node node2 Nov 13 03:58:36.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5182" for this suite. •! Panic [3.069 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x653b640, 0x9beb6a0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc0015b8f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003a98500, 0xc0015b8f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0004dde78, 0xc003a98500, 0xc001fbb9e0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0004dde78, 0xc003a98500, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0004dde78, 0xc003a98500, 0xc0004dde78, 0xc003a98500) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003a98500, 0x14, 0xc004aa0ba0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc001fc3ce0, 0xc0027a19e0, 0x14, 0xc004aa0ba0, 0x2c, 0x0, 
0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e01b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001e01b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001e01b00, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:36.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 Nov 13 03:58:36.244: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:36.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-5293" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:34.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Nov 13 03:58:34.156: INFO: Found ClusterRoles; assuming RBAC is enabled. 
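Annotation on the panic recorded above for "[sig-node] Probing container should be ready immediately after startupProbe succeeds": the trace points at the framework's pod.podContainerStarted condition (resource.go:334), which suggests a nil value was dereferenced while polling for the container's Started status. Below is a minimal, hypothetical Go sketch of a nil-safe condition of that kind; the helper name and signature are assumptions for illustration, not the e2e framework's actual code.

    package podutil

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // containerStartedCondition reports whether the container at index idx of the
    // named pod has started. It treats a missing container status or a nil
    // Started pointer (Started is *bool in corev1) as "not started yet" instead of
    // dereferencing it, which is the kind of guard that avoids a nil-pointer panic
    // like the one in the trace above. Hypothetical helper, not framework code.
    func containerStartedCondition(ctx context.Context, c kubernetes.Interface, ns, name string, idx int) wait.ConditionFunc {
        return func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            statuses := pod.Status.ContainerStatuses
            if idx >= len(statuses) {
                return false, nil // kubelet has not reported this container yet
            }
            started := statuses[idx].Started
            if started == nil {
                return false, nil // field may be unset early in the pod's life
            }
            return *started, nil
        }
    }

Such a condition would typically be driven by a poll loop (wait.PollImmediate in the stack trace above), returning false until the kubelet has actually populated the status.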
[It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Nov 13 03:58:34.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-4625 create -f -' Nov 13 03:58:34.614: INFO: stderr: "" Nov 13 03:58:34.614: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Nov 13 03:58:38.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-4625 logs dapi-test-pod test-container' Nov 13 03:58:38.788: INFO: stderr: "" Nov 13 03:58:38.788: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-4625\nMY_POD_IP=10.244.3.113\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Nov 13 03:58:38.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-4625 logs dapi-test-pod test-container' Nov 13 03:58:38.976: INFO: stderr: "" Nov 13 03:58:38.976: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-4625\nMY_POD_IP=10.244.3.113\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:38.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-4625" for this suite. 
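The dapi-test-pod output above shows MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP and MY_HOST_IP injected through the Downward API. A minimal sketch of how such a pod spec is expressed with the Go client types follows; the pod name, image and command are illustrative, and only the environment-variable names and field paths mirror the logged output.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardEnv maps an environment variable onto a pod field via the Downward API.
func downwardEnv(name, fieldPath string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
		},
	}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					downwardEnv("MY_POD_NAME", "metadata.name"),
					downwardEnv("MY_POD_NAMESPACE", "metadata.namespace"),
					downwardEnv("MY_POD_IP", "status.podIP"),
					downwardEnv("MY_HOST_IP", "status.hostIP"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}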
• ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":3,"skipped":263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:14.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Nov 13 03:58:14.554: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:16.558: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:18.558: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:20.561: INFO: The status of Pod master is Running (Ready = true) Nov 13 03:58:20.579: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:22.582: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:24.583: INFO: The status of Pod slave is Running (Ready = true) Nov 13 03:58:24.597: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:26.601: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:28.600: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:30.604: INFO: The status of Pod private is Running (Ready = true) Nov 13 03:58:30.619: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:32.622: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:58:34.621: INFO: The status of Pod default is Running (Ready = true) Nov 13 03:58:34.626: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:34.626: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:34.711: INFO: Exec stderr: "" Nov 13 03:58:34.714: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:34.714: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:34.799: INFO: Exec stderr: "" Nov 13 03:58:34.801: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:34.801: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:34.979: INFO: Exec stderr: "" Nov 13 03:58:34.982: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 
13 03:58:34.982: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:35.156: INFO: Exec stderr: "" Nov 13 03:58:35.159: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:35.159: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:35.343: INFO: Exec stderr: "" Nov 13 03:58:35.345: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:35.345: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:35.424: INFO: Exec stderr: "" Nov 13 03:58:35.427: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:35.427: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:35.557: INFO: Exec stderr: "" Nov 13 03:58:35.560: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:35.560: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:35.677: INFO: Exec stderr: "" Nov 13 03:58:35.680: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:35.680: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:35.759: INFO: Exec stderr: "" Nov 13 03:58:35.763: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:35.763: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:35.845: INFO: Exec stderr: "" Nov 13 03:58:35.848: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:35.848: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:35.929: INFO: Exec stderr: "" Nov 13 03:58:35.931: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:35.931: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:36.014: INFO: Exec stderr: "" Nov 13 03:58:36.016: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:36.017: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:36.100: INFO: Exec stderr: "" Nov 13 03:58:36.102: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:36.103: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:36.186: INFO: Exec stderr: "" 
Nov 13 03:58:36.188: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:36.188: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:36.278: INFO: Exec stderr: "" Nov 13 03:58:36.282: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:36.282: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:36.365: INFO: Exec stderr: "" Nov 13 03:58:36.367: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:36.367: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:36.455: INFO: Exec stderr: "" Nov 13 03:58:36.459: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:36.459: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:36.543: INFO: Exec stderr: "" Nov 13 03:58:36.545: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:36.545: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:36.633: INFO: Exec stderr: "" Nov 13 03:58:36.636: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:36.636: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:36.725: INFO: Exec stderr: "" Nov 13 03:58:40.743: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-8414"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-8414"/host; echo host > "/var/lib/kubelet/mount-propagation-8414"/host/file] Namespace:mount-propagation-8414 PodName:hostexec-node2-qfklx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 03:58:40.743: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:40.837: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:40.837: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:40.917: INFO: pod master mount master: stdout: "master", stderr: "" error: Nov 13 03:58:40.919: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 
13 03:58:40.919: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.011: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:41.013: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.013: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.090: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:41.092: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.092: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.176: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:41.178: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.178: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.268: INFO: pod master mount host: stdout: "host", stderr: "" error: Nov 13 03:58:41.270: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.271: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.347: INFO: pod slave mount master: stdout: "master", stderr: "" error: Nov 13 03:58:41.350: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.350: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.427: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Nov 13 03:58:41.433: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.434: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.519: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:41.522: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.522: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.599: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:41.602: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.602: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.683: INFO: pod slave mount host: stdout: "host", stderr: "" error: Nov 13 03:58:41.685: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.685: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.775: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:41.777: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.777: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.867: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:41.870: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.870: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:41.948: INFO: pod private mount private: stdout: "private", stderr: "" error: Nov 13 03:58:41.950: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:41.950: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.033: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:42.036: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:42.036: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.117: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:42.120: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:42.120: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.200: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:42.204: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:42.204: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.280: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:42.283: INFO: 
ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:42.283: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.364: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:42.366: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:42.366: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.449: INFO: pod default mount default: stdout: "default", stderr: "" error: Nov 13 03:58:42.452: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:42.452: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.526: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Nov 13 03:58:42.526: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-8414"/master/file` = master] Namespace:mount-propagation-8414 PodName:hostexec-node2-qfklx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 03:58:42.526: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.630: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-8414"/slave/file] Namespace:mount-propagation-8414 PodName:hostexec-node2-qfklx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 03:58:42.630: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.717: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-8414"/host] Namespace:mount-propagation-8414 PodName:hostexec-node2-qfklx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 03:58:42.717: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.807: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-8414 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:42.807: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:42.893: INFO: Exec stderr: "" Nov 13 03:58:42.896: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-8414 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:42.896: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:43.017: INFO: Exec stderr: "" Nov 13 03:58:43.019: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-8414 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:43.019: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:43.104: INFO: Exec stderr: "" Nov 13 03:58:43.107: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-8414 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:58:43.107: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:58:43.196: INFO: Exec stderr: "" Nov 13 03:58:43.197: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-8414"] Namespace:mount-propagation-8414 PodName:hostexec-node2-qfklx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 03:58:43.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node2-qfklx in namespace mount-propagation-8414 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:43.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-8414" for this suite. 
• [SLOW TEST:28.801 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":5,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:43.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Nov 13 03:58:43.669: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:43.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-3699" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:45.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Nov 13 03:57:45.990: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Nov 13 03:57:46.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3322 create -f -' Nov 13 03:57:46.407: INFO: stderr: "" Nov 13 03:57:46.407: INFO: stdout: "pod/liveness-exec created\n" Nov 13 03:57:46.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3322 create -f -' Nov 13 03:57:46.730: INFO: stderr: "" Nov 13 03:57:46.730: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Nov 13 03:57:54.739: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:57:54.740: INFO: Pod: liveness-http, restart count:0 Nov 13 03:57:56.744: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:57:56.744: INFO: Pod: liveness-http, restart count:0 Nov 13 03:57:58.747: INFO: Pod: liveness-http, restart count:0 Nov 13 03:57:58.747: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:00.752: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:00.752: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:02.755: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:02.755: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:04.759: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:04.759: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:06.763: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:06.763: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:08.767: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:08.767: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:10.772: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:10.772: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:12.776: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:12.776: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:14.780: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:14.780: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:16.785: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:16.785: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:18.788: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:18.788: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:20.792: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:20.792: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:22.796: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:22.796: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:24.800: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:24.800: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:26.804: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:26.804: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:28.808: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:28.808: INFO: Pod: liveness-http, restart count:0 Nov 13 03:58:30.812: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:30.812: INFO: Pod: liveness-http, restart count:1 Nov 13 03:58:30.812: INFO: Saw liveness-http restart, succeeded... 
Nov 13 03:58:32.815: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:34.818: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:36.821: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:38.824: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:40.828: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:42.831: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:44.836: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:46.839: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:48.842: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:50.846: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:52.849: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:54.853: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:56.857: INFO: Pod: liveness-exec, restart count:0 Nov 13 03:58:58.861: INFO: Pod: liveness-exec, restart count:1 Nov 13 03:58:58.861: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:58:58.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3322" for this suite. • [SLOW TEST:72.911 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":3,"skipped":764,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:59.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Nov 13 03:58:59.170: INFO: Waiting up to 5m0s for pod "security-context-4dbd379a-2a4c-4036-8872-549b6325604e" in namespace "security-context-7475" to be "Succeeded or Failed" Nov 13 03:58:59.172: INFO: Pod "security-context-4dbd379a-2a4c-4036-8872-549b6325604e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739318ms Nov 13 03:59:01.176: INFO: Pod "security-context-4dbd379a-2a4c-4036-8872-549b6325604e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006658578s Nov 13 03:59:03.180: INFO: Pod "security-context-4dbd379a-2a4c-4036-8872-549b6325604e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010173941s STEP: Saw pod success Nov 13 03:59:03.180: INFO: Pod "security-context-4dbd379a-2a4c-4036-8872-549b6325604e" satisfied condition "Succeeded or Failed" Nov 13 03:59:03.183: INFO: Trying to get logs from node node2 pod security-context-4dbd379a-2a4c-4036-8872-549b6325604e container test-container: STEP: delete the pod Nov 13 03:59:03.196: INFO: Waiting for pod security-context-4dbd379a-2a4c-4036-8872-549b6325604e to disappear Nov 13 03:59:03.198: INFO: Pod security-context-4dbd379a-2a4c-4036-8872-549b6325604e no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:03.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7475" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":4,"skipped":908,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:03.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:07.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1068" for this suite. 
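The two security-context specs above exercise pod.Spec.SecurityContext.RunAsUser and the runAsNonRoot check against an explicit root UID. A minimal sketch of the relevant fields follows; the pod name, image, command and UID values are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "runasuser-demo"},
		Spec: corev1.PodSpec{
			// Pod-level default: every container runs as UID 1001 unless overridden.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"id", "-u"},
				// Container-level override plus the non-root guard: the kubelet
				// refuses to start the container if the effective UID is 0.
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:    int64Ptr(1001),
					RunAsNonRoot: boolPtr(true),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}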
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":5,"skipped":1077,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:44.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-42d78a3e-065c-43d7-8662-f3e41e9562f6 in namespace container-probe-3621 Nov 13 03:58:48.213: INFO: Started pod liveness-42d78a3e-065c-43d7-8662-f3e41e9562f6 in namespace container-probe-3621 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 03:58:48.216: INFO: Initial restart count of pod liveness-42d78a3e-065c-43d7-8662-f3e41e9562f6 is 0 Nov 13 03:59:10.269: INFO: Restart count of pod container-probe-3621/liveness-42d78a3e-065c-43d7-8662-f3e41e9562f6 is now 1 (22.052332683s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:10.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3621" for this suite. 
• [SLOW TEST:26.104 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":6,"skipped":838,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:05.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-c42b5025-e01d-4836-89bd-db95bd59a25f in namespace container-probe-8876 Nov 13 03:58:17.666: INFO: Started pod busybox-c42b5025-e01d-4836-89bd-db95bd59a25f in namespace container-probe-8876 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 03:58:17.670: INFO: Initial restart count of pod busybox-c42b5025-e01d-4836-89bd-db95bd59a25f is 0 Nov 13 03:59:11.782: INFO: Restart count of pod container-probe-8876/busybox-c42b5025-e01d-4836-89bd-db95bd59a25f is now 1 (54.112593615s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:11.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8876" for this suite. 
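The "[MinimumKubeletVersion:1.20]" tag on the spec above reflects that exec-probe timeouts are only enforced by kubelets from 1.20 on (the ExecProbeTimeout gate); before that, a hung probe command was never cut off. A minimal sketch of an exec probe whose command outlives its timeoutSeconds, so each attempt fails and the container is eventually restarted, follows (the command and threshold values are illustrative; same Probe/Handler types as the sketch above).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := &corev1.Probe{
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{
				// Sleeps longer than TimeoutSeconds, so every attempt times out.
				Command: []string{"/bin/sh", "-c", "sleep 10"},
			},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      1,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	out, _ := json.MarshalIndent(probe, "", "  ")
	fmt.Println(string(out))
}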
• [SLOW TEST:66.175 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":6,"skipped":484,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:07.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Nov 13 03:59:07.608: INFO: Waiting up to 5m0s for pod "security-context-a4283f2d-7c83-4b5d-94ed-e161ad09dd96" in namespace "security-context-5955" to be "Succeeded or Failed" Nov 13 03:59:07.611: INFO: Pod "security-context-a4283f2d-7c83-4b5d-94ed-e161ad09dd96": Phase="Pending", Reason="", readiness=false. Elapsed: 3.114555ms Nov 13 03:59:09.614: INFO: Pod "security-context-a4283f2d-7c83-4b5d-94ed-e161ad09dd96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005972482s Nov 13 03:59:11.618: INFO: Pod "security-context-a4283f2d-7c83-4b5d-94ed-e161ad09dd96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010751259s Nov 13 03:59:13.622: INFO: Pod "security-context-a4283f2d-7c83-4b5d-94ed-e161ad09dd96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014673745s STEP: Saw pod success Nov 13 03:59:13.622: INFO: Pod "security-context-a4283f2d-7c83-4b5d-94ed-e161ad09dd96" satisfied condition "Succeeded or Failed" Nov 13 03:59:13.624: INFO: Trying to get logs from node node2 pod security-context-a4283f2d-7c83-4b5d-94ed-e161ad09dd96 container test-container: STEP: delete the pod Nov 13 03:59:13.636: INFO: Waiting for pod security-context-a4283f2d-7c83-4b5d-94ed-e161ad09dd96 to disappear Nov 13 03:59:13.638: INFO: Pod security-context-a4283f2d-7c83-4b5d-94ed-e161ad09dd96 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:13.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5955" for this suite. 
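The spec above checks that pod.Spec.SecurityContext.SupplementalGroups show up in the container's group list (id -G). A minimal sketch of the pod-level field follows; the GID values are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	sc := &corev1.PodSecurityContext{
		RunAsUser:          int64Ptr(1000),
		FSGroup:            int64Ptr(2000),
		SupplementalGroups: []int64{3000, 4000}, // extra groups added to the container's `id -G`
	}
	out, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(out))
}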
• [SLOW TEST:6.069 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":6,"skipped":1082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:10.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:14.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-613" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":7,"skipped":890,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:13.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Nov 13 03:59:13.867: INFO: Waiting up to 5m0s for pod "busybox-user-0-9e699c09-6f22-4906-986f-b0d4e5aafb35" in namespace "security-context-test-245" to be "Succeeded or Failed" Nov 13 03:59:13.869: INFO: Pod "busybox-user-0-9e699c09-6f22-4906-986f-b0d4e5aafb35": Phase="Pending", Reason="", readiness=false. Elapsed: 1.964766ms Nov 13 03:59:15.873: INFO: Pod "busybox-user-0-9e699c09-6f22-4906-986f-b0d4e5aafb35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006168187s Nov 13 03:59:17.877: INFO: Pod "busybox-user-0-9e699c09-6f22-4906-986f-b0d4e5aafb35": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010179811s Nov 13 03:59:17.877: INFO: Pod "busybox-user-0-9e699c09-6f22-4906-986f-b0d4e5aafb35" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:17.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-245" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":1177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:14.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:19.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3526" for this suite. • [SLOW TEST:5.097 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":8,"skipped":941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:19.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Nov 13 03:59:19.715: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-9e689a6a-ac5a-4608-a691-2c91ac430989" in namespace "security-context-test-3653" to be "Succeeded or Failed" 
Nov 13 03:59:19.718: INFO: Pod "busybox-privileged-true-9e689a6a-ac5a-4608-a691-2c91ac430989": Phase="Pending", Reason="", readiness=false. Elapsed: 3.069075ms Nov 13 03:59:21.721: INFO: Pod "busybox-privileged-true-9e689a6a-ac5a-4608-a691-2c91ac430989": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006195999s Nov 13 03:59:23.727: INFO: Pod "busybox-privileged-true-9e689a6a-ac5a-4608-a691-2c91ac430989": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012526651s Nov 13 03:59:25.732: INFO: Pod "busybox-privileged-true-9e689a6a-ac5a-4608-a691-2c91ac430989": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017862865s Nov 13 03:59:25.733: INFO: Pod "busybox-privileged-true-9e689a6a-ac5a-4608-a691-2c91ac430989" satisfied condition "Succeeded or Failed" Nov 13 03:59:25.738: INFO: Got logs for pod "busybox-privileged-true-9e689a6a-ac5a-4608-a691-2c91ac430989": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:25.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3653" for this suite. • [SLOW TEST:6.062 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":9,"skipped":966,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:25.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Nov 13 03:59:25.879: INFO: Waiting up to 5m0s for pod "security-context-29a3aa36-9b8e-47d5-88b0-15ce1f27c7da" in namespace "security-context-7921" to be "Succeeded or Failed" Nov 13 03:59:25.881: INFO: Pod "security-context-29a3aa36-9b8e-47d5-88b0-15ce1f27c7da": Phase="Pending", Reason="", readiness=false. Elapsed: 1.961528ms Nov 13 03:59:27.885: INFO: Pod "security-context-29a3aa36-9b8e-47d5-88b0-15ce1f27c7da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005180964s Nov 13 03:59:29.890: INFO: Pod "security-context-29a3aa36-9b8e-47d5-88b0-15ce1f27c7da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010795211s STEP: Saw pod success Nov 13 03:59:29.890: INFO: Pod "security-context-29a3aa36-9b8e-47d5-88b0-15ce1f27c7da" satisfied condition "Succeeded or Failed" Nov 13 03:59:29.893: INFO: Trying to get logs from node node2 pod security-context-29a3aa36-9b8e-47d5-88b0-15ce1f27c7da container test-container: STEP: delete the pod Nov 13 03:59:30.007: INFO: Waiting for pod security-context-29a3aa36-9b8e-47d5-88b0-15ce1f27c7da to disappear Nov 13 03:59:30.009: INFO: Pod security-context-29a3aa36-9b8e-47d5-88b0-15ce1f27c7da no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:30.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7921" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":10,"skipped":1016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:58:36.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-a23a234d-f463-40b6-954d-e648713f4ef4 in namespace container-probe-7956 Nov 13 03:58:40.372: INFO: Started pod busybox-a23a234d-f463-40b6-954d-e648713f4ef4 in namespace container-probe-7956 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 03:58:40.375: INFO: Initial restart count of pod busybox-a23a234d-f463-40b6-954d-e648713f4ef4 is 0 Nov 13 03:59:30.533: INFO: Restart count of pod container-probe-7956/busybox-a23a234d-f463-40b6-954d-e648713f4ef4 is now 1 (50.158587283s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:30.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7956" for this suite. 
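Stepping back to the seccomp spec that passed just above (security-context-7921): it requests runtime/default through the legacy seccomp.security.alpha.kubernetes.io/pod annotation, which is the value logged in its STEP line; since v1.19 the structured securityContext.seccompProfile field expresses the same thing. A minimal sketch of both forms follows; the pod name, image and command are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-demo",
			// Legacy annotation form, as exercised by the spec above.
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "runtime/default",
			},
		},
		Spec: corev1.PodSpec{
			// Structured form available since v1.19.
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{
					Type: corev1.SeccompProfileTypeRuntimeDefault,
				},
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"grep", "Seccomp:", "/proc/1/status"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}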
• [SLOW TEST:54.216 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":5,"skipped":489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:30.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:30.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-6889" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":6,"skipped":541,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:30.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:34.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6506" for this suite. 
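The container-runtime spec above confirms that a pull from a private registry fails when no credentials are supplied. The usual way to wire credentials in is a kubernetes.io/dockerconfigjson Secret referenced from pod.Spec.ImagePullSecrets; a minimal sketch follows, with the secret name, registry and image all hypothetical.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-demo"},
		Spec: corev1.PodSpec{
			// The kubelet presents this docker-registry secret when pulling the image below.
			ImagePullSecrets: []corev1.LocalObjectReference{{Name: "regcred"}},
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example.com/team/app:1.0", // hypothetical private image
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}

The secret itself is typically created with kubectl create secret docker-registry regcred --docker-server=... --docker-username=... --docker-password=... (names here are, again, placeholders).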
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":11,"skipped":1061,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:30.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Nov 13 03:59:30.776: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-6315" to be "Succeeded or Failed" Nov 13 03:59:30.778: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 1.793155ms Nov 13 03:59:32.783: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00687185s Nov 13 03:59:34.787: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01163006s Nov 13 03:59:34.788: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:34.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6315" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":7,"skipped":563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:34.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:36.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-293" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":8,"skipped":593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:37.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 Nov 13 03:59:37.153: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:37.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-4332" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:37.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Nov 13 03:59:37.273: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:37.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-7642" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:11.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-5ded40ab-c98f-495d-b85e-688f68dfe75a in namespace container-probe-9368 Nov 13 03:59:19.876: INFO: Started pod liveness-override-5ded40ab-c98f-495d-b85e-688f68dfe75a in namespace 
container-probe-9368 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 03:59:19.878: INFO: Initial restart count of pod liveness-override-5ded40ab-c98f-495d-b85e-688f68dfe75a is 1 Nov 13 03:59:37.922: INFO: Restart count of pod container-probe-9368/liveness-override-5ded40ab-c98f-495d-b85e-688f68dfe75a is now 2 (18.043258394s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:37.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9368" for this suite. • [SLOW TEST:26.105 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":7,"skipped":499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:34.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Nov 13 03:59:34.487: INFO: Waiting up to 5m0s for pod "security-context-554cd8ef-e45b-4da8-a9d5-d0b1c4db555c" in namespace "security-context-1082" to be "Succeeded or Failed" Nov 13 03:59:34.491: INFO: Pod "security-context-554cd8ef-e45b-4da8-a9d5-d0b1c4db555c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.990658ms Nov 13 03:59:36.493: INFO: Pod "security-context-554cd8ef-e45b-4da8-a9d5-d0b1c4db555c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006420482s Nov 13 03:59:38.497: INFO: Pod "security-context-554cd8ef-e45b-4da8-a9d5-d0b1c4db555c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010705726s STEP: Saw pod success Nov 13 03:59:38.497: INFO: Pod "security-context-554cd8ef-e45b-4da8-a9d5-d0b1c4db555c" satisfied condition "Succeeded or Failed" Nov 13 03:59:38.500: INFO: Trying to get logs from node node2 pod security-context-554cd8ef-e45b-4da8-a9d5-d0b1c4db555c container test-container: STEP: delete the pod Nov 13 03:59:38.565: INFO: Waiting for pod security-context-554cd8ef-e45b-4da8-a9d5-d0b1c4db555c to disappear Nov 13 03:59:38.567: INFO: Pod security-context-554cd8ef-e45b-4da8-a9d5-d0b1c4db555c no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:38.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1082" for this suite. 
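------------------------------
The security-context-1082 spec above sets pod-level seccomp to "unconfined"; the step text references the legacy seccomp.security.alpha.kubernetes.io/pod annotation. A hedged sketch follows showing both that annotation and the structured securityContext.seccompProfile field, which is the current equivalent; the image and command are illustrative assumptions.

// Hedged sketch: pod-level seccomp set to Unconfined, via the structured
// field plus the legacy annotation mentioned in the test step.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "security-context-seccomp-unconfined",
			Annotations: map[string]string{
				// Legacy annotation form referenced in the test step.
				"seccomp.security.alpha.kubernetes.io/pod": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeUnconfined},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "grep Seccomp /proc/self/status"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println("seccomp profile:", pod.Spec.SecurityContext.SeccompProfile.Type)
}

The container-level variant exercised a few specs later differs only in placing the SeccompProfile on the container's SecurityContext instead of the pod's.
------------------------------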
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":12,"skipped":1209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:38.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Nov 13 03:59:38.094: INFO: Waiting up to 5m0s for pod "security-context-8c58d518-15ea-4caf-9f38-09d8c47899ea" in namespace "security-context-7476" to be "Succeeded or Failed" Nov 13 03:59:38.097: INFO: Pod "security-context-8c58d518-15ea-4caf-9f38-09d8c47899ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.4116ms Nov 13 03:59:40.099: INFO: Pod "security-context-8c58d518-15ea-4caf-9f38-09d8c47899ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00513206s Nov 13 03:59:42.103: INFO: Pod "security-context-8c58d518-15ea-4caf-9f38-09d8c47899ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008499153s Nov 13 03:59:44.109: INFO: Pod "security-context-8c58d518-15ea-4caf-9f38-09d8c47899ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014320118s STEP: Saw pod success Nov 13 03:59:44.109: INFO: Pod "security-context-8c58d518-15ea-4caf-9f38-09d8c47899ea" satisfied condition "Succeeded or Failed" Nov 13 03:59:44.111: INFO: Trying to get logs from node node2 pod security-context-8c58d518-15ea-4caf-9f38-09d8c47899ea container test-container: STEP: delete the pod Nov 13 03:59:44.125: INFO: Waiting for pod security-context-8c58d518-15ea-4caf-9f38-09d8c47899ea to disappear Nov 13 03:59:44.127: INFO: Pod security-context-8c58d518-15ea-4caf-9f38-09d8c47899ea no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:44.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7476" for this suite. 
• [SLOW TEST:6.075 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":8,"skipped":564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:37.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:59:55.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7225" for this suite. 
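------------------------------
The pods-7225 spec above declares two custom readiness gates and then patches the matching pod conditions. A hedged sketch of that shape follows; the pause image is a placeholder, and the condition-patching step would be done by an external controller against the pods/status subresource rather than shown here in full.

// Hedged sketch: pod with two custom readiness gates; it only reports Ready
// once both named conditions are set to True on its status subresource.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-ready"},
		Spec: corev1.PodSpec{
			ReadinessGates: []corev1.PodReadinessGate{
				{ConditionType: "k8s.io/test-condition1"},
				{ConditionType: "k8s.io/test-condition2"},
			},
			Containers: []corev1.Container{{
				Name:  "pod-readiness-gate",
				Image: "k8s.gcr.io/pause:3.4.1", // placeholder image
			}},
		},
	}
	// A controller would then PATCH pod.Status.Conditions via pods/status,
	// e.g. adding {Type: "k8s.io/test-condition1", Status: ConditionTrue};
	// flipping one gate back to False makes the pod NotReady again, which is
	// what the final patch in the spec checks.
	for _, g := range pod.Spec.ReadinessGates {
		fmt.Println("readiness gate:", g.ConditionType)
	}
}
------------------------------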
• [SLOW TEST:18.097 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":9,"skipped":823,"failed":0} Nov 13 03:59:55.598: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods W1113 03:57:03.416988 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:03.417: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:03.418: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Nov 13 03:57:11.908: INFO: watch delete seen for pod-submit-status-1-0 Nov 13 03:57:11.908: INFO: Pod pod-submit-status-1-0 on node node1 timings total=8.487540621s t=700ms run=0s execute=0s Nov 13 03:57:13.660: INFO: watch delete seen for pod-submit-status-2-0 Nov 13 03:57:13.660: INFO: Pod pod-submit-status-2-0 on node node1 timings total=10.239457037s t=240ms run=0s execute=0s Nov 13 03:57:15.269: INFO: watch delete seen for pod-submit-status-1-1 Nov 13 03:57:15.269: INFO: Pod pod-submit-status-1-1 on node node1 timings total=3.360971434s t=987ms run=0s execute=0s Nov 13 03:57:20.061: INFO: watch delete seen for pod-submit-status-2-1 Nov 13 03:57:20.062: INFO: Pod pod-submit-status-2-1 on node node1 timings total=6.401213402s t=1.636s run=0s execute=0s Nov 13 03:57:23.177: INFO: watch delete seen for pod-submit-status-0-0 Nov 13 03:57:23.177: INFO: Pod pod-submit-status-0-0 on node node2 timings total=19.756431136s t=263ms run=0s execute=0s Nov 13 03:57:25.460: INFO: watch delete seen for pod-submit-status-1-2 Nov 13 03:57:25.460: INFO: Pod pod-submit-status-1-2 on node node1 timings total=10.190485751s t=802ms run=0s execute=0s Nov 13 03:57:30.861: INFO: watch delete seen for pod-submit-status-1-3 Nov 13 03:57:30.861: INFO: Pod pod-submit-status-1-3 on node node1 timings total=5.400785848s t=784ms run=0s execute=0s Nov 13 03:57:31.260: INFO: watch delete seen for pod-submit-status-2-2 Nov 13 03:57:31.260: INFO: Pod pod-submit-status-2-2 on node node1 timings total=11.198226758s t=1.903s run=0s execute=0s Nov 13 03:57:33.661: INFO: watch delete seen for pod-submit-status-0-1 Nov 13 03:57:33.662: INFO: Pod pod-submit-status-0-1 on node node1 timings total=10.484222784s t=1.634s run=0s execute=0s Nov 13 03:57:39.260: INFO: watch delete seen for pod-submit-status-0-2 Nov 13 03:57:39.260: 
INFO: Pod pod-submit-status-0-2 on node node1 timings total=5.59872721s t=1.477s run=0s execute=0s Nov 13 03:57:42.036: INFO: watch delete seen for pod-submit-status-0-3 Nov 13 03:57:42.036: INFO: Pod pod-submit-status-0-3 on node node1 timings total=2.775714229s t=794ms run=0s execute=0s Nov 13 03:57:43.752: INFO: watch delete seen for pod-submit-status-2-3 Nov 13 03:57:43.752: INFO: Pod pod-submit-status-2-3 on node node2 timings total=12.492571321s t=1.465s run=0s execute=0s Nov 13 03:57:48.261: INFO: watch delete seen for pod-submit-status-0-4 Nov 13 03:57:48.261: INFO: Pod pod-submit-status-0-4 on node node1 timings total=6.224787916s t=1.789s run=0s execute=0s Nov 13 03:57:48.861: INFO: watch delete seen for pod-submit-status-2-4 Nov 13 03:57:48.861: INFO: Pod pod-submit-status-2-4 on node node1 timings total=5.108138492s t=1.981s run=0s execute=0s Nov 13 03:57:51.963: INFO: watch delete seen for pod-submit-status-1-4 Nov 13 03:57:51.963: INFO: Pod pod-submit-status-1-4 on node node2 timings total=21.101714342s t=1.043s run=0s execute=0s Nov 13 03:57:54.660: INFO: watch delete seen for pod-submit-status-2-5 Nov 13 03:57:54.661: INFO: Pod pod-submit-status-2-5 on node node1 timings total=5.799866573s t=1.074s run=0s execute=0s Nov 13 03:57:55.355: INFO: watch delete seen for pod-submit-status-0-5 Nov 13 03:57:55.355: INFO: Pod pod-submit-status-0-5 on node node2 timings total=7.09366296s t=1.701s run=2s execute=0s Nov 13 03:57:58.809: INFO: watch delete seen for pod-submit-status-2-6 Nov 13 03:57:58.809: INFO: Pod pod-submit-status-2-6 on node node2 timings total=4.148659357s t=907ms run=0s execute=0s Nov 13 03:57:59.752: INFO: watch delete seen for pod-submit-status-0-6 Nov 13 03:57:59.752: INFO: Pod pod-submit-status-0-6 on node node2 timings total=4.397272935s t=1.612s run=0s execute=0s Nov 13 03:58:00.564: INFO: watch delete seen for pod-submit-status-1-5 Nov 13 03:58:00.564: INFO: Pod pod-submit-status-1-5 on node node2 timings total=8.601458237s t=1.555s run=0s execute=0s Nov 13 03:58:02.214: INFO: watch delete seen for pod-submit-status-2-7 Nov 13 03:58:02.214: INFO: Pod pod-submit-status-2-7 on node node2 timings total=3.404651168s t=1.22s run=0s execute=0s Nov 13 03:58:07.560: INFO: watch delete seen for pod-submit-status-1-6 Nov 13 03:58:07.560: INFO: Pod pod-submit-status-1-6 on node node2 timings total=6.99538195s t=1.583s run=0s execute=0s Nov 13 03:58:08.752: INFO: watch delete seen for pod-submit-status-0-7 Nov 13 03:58:08.752: INFO: Pod pod-submit-status-0-7 on node node2 timings total=9.000219467s t=1.097s run=0s execute=0s Nov 13 03:58:10.953: INFO: watch delete seen for pod-submit-status-2-8 Nov 13 03:58:10.954: INFO: Pod pod-submit-status-2-8 on node node2 timings total=8.739386546s t=911ms run=0s execute=0s Nov 13 03:58:12.352: INFO: watch delete seen for pod-submit-status-0-8 Nov 13 03:58:12.352: INFO: Pod pod-submit-status-0-8 on node node2 timings total=3.59954616s t=1.614s run=0s execute=0s Nov 13 03:58:17.839: INFO: watch delete seen for pod-submit-status-2-9 Nov 13 03:58:17.839: INFO: Pod pod-submit-status-2-9 on node node2 timings total=6.885017184s t=1.33s run=0s execute=0s Nov 13 03:58:21.961: INFO: watch delete seen for pod-submit-status-0-9 Nov 13 03:58:21.961: INFO: Pod pod-submit-status-0-9 on node node2 timings total=9.609266305s t=1.935s run=0s execute=0s Nov 13 03:58:31.447: INFO: watch delete seen for pod-submit-status-0-10 Nov 13 03:58:31.447: INFO: Pod pod-submit-status-0-10 on node node2 timings total=9.485241276s t=149ms run=0s execute=0s Nov 
13 03:58:41.360: INFO: watch delete seen for pod-submit-status-0-11 Nov 13 03:58:41.360: INFO: Pod pod-submit-status-0-11 on node node1 timings total=9.912966967s t=1.368s run=0s execute=0s Nov 13 03:58:45.373: INFO: watch delete seen for pod-submit-status-0-12 Nov 13 03:58:45.373: INFO: Pod pod-submit-status-0-12 on node node1 timings total=4.013498074s t=1.809s run=2s execute=0s Nov 13 03:59:01.356: INFO: watch delete seen for pod-submit-status-0-13 Nov 13 03:59:01.356: INFO: Pod pod-submit-status-0-13 on node node1 timings total=15.982285032s t=1.229s run=2s execute=0s Nov 13 03:59:03.217: INFO: watch delete seen for pod-submit-status-2-10 Nov 13 03:59:03.217: INFO: Pod pod-submit-status-2-10 on node node2 timings total=45.378195291s t=681ms run=0s execute=0s Nov 13 03:59:03.225: INFO: watch delete seen for pod-submit-status-1-7 Nov 13 03:59:03.225: INFO: Pod pod-submit-status-1-7 on node node2 timings total=55.664964446s t=1.347s run=0s execute=0s Nov 13 03:59:05.464: INFO: watch delete seen for pod-submit-status-2-11 Nov 13 03:59:05.464: INFO: Pod pod-submit-status-2-11 on node node1 timings total=2.246676678s t=966ms run=0s execute=0s Nov 13 03:59:05.887: INFO: watch delete seen for pod-submit-status-0-14 Nov 13 03:59:05.887: INFO: Pod pod-submit-status-0-14 on node node1 timings total=4.530989319s t=763ms run=0s execute=0s Nov 13 03:59:07.803: INFO: watch delete seen for pod-submit-status-2-12 Nov 13 03:59:07.803: INFO: Pod pod-submit-status-2-12 on node node2 timings total=2.339379517s t=575ms run=0s execute=0s Nov 13 03:59:10.401: INFO: watch delete seen for pod-submit-status-2-13 Nov 13 03:59:10.401: INFO: Pod pod-submit-status-2-13 on node node2 timings total=2.597855015s t=399ms run=0s execute=0s Nov 13 03:59:12.801: INFO: watch delete seen for pod-submit-status-1-8 Nov 13 03:59:12.801: INFO: Pod pod-submit-status-1-8 on node node2 timings total=9.576768591s t=284ms run=0s execute=0s Nov 13 03:59:16.846: INFO: watch delete seen for pod-submit-status-1-9 Nov 13 03:59:16.846: INFO: Pod pod-submit-status-1-9 on node node1 timings total=4.044718149s t=290ms run=0s execute=0s Nov 13 03:59:22.402: INFO: watch delete seen for pod-submit-status-1-10 Nov 13 03:59:22.403: INFO: Pod pod-submit-status-1-10 on node node2 timings total=5.556259622s t=699ms run=0s execute=0s Nov 13 03:59:31.453: INFO: watch delete seen for pod-submit-status-1-11 Nov 13 03:59:31.453: INFO: Pod pod-submit-status-1-11 on node node2 timings total=9.050647391s t=1.922s run=3s execute=0s Nov 13 03:59:51.448: INFO: watch delete seen for pod-submit-status-1-12 Nov 13 03:59:51.448: INFO: Pod pod-submit-status-1-12 on node node2 timings total=19.994349746s t=648ms run=0s execute=0s Nov 13 03:59:54.878: INFO: watch delete seen for pod-submit-status-2-14 Nov 13 03:59:54.878: INFO: Pod pod-submit-status-2-14 on node node1 timings total=44.47679731s t=806ms run=0s execute=0s Nov 13 04:00:01.362: INFO: watch delete seen for pod-submit-status-1-13 Nov 13 04:00:01.362: INFO: Pod pod-submit-status-1-13 on node node1 timings total=9.913970632s t=1.505s run=0s execute=0s Nov 13 04:00:02.171: INFO: watch delete seen for pod-submit-status-1-14 Nov 13 04:00:02.171: INFO: Pod pod-submit-status-1-14 on node node1 timings total=809.281115ms t=721ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:00:02.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3011" for 
this suite. • [SLOW TEST:178.783 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":1,"skipped":60,"failed":0} Nov 13 04:00:02.181: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:38.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Nov 13 04:00:04.822: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:00:04.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7927" for this suite. 
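------------------------------
The prestop-7927 spec above deletes the pod gracefully and verifies it keeps running while the preStop hook is still in progress. A hedged sketch of such a pod follows, assuming a recent k8s.io/api release where the hook handler type is named corev1.LifecycleHandler (older releases call it corev1.Handler); the sleep lengths, image and grace period are illustrative assumptions.

// Hedged sketch: preStop hook that outlives an immediate exit, so the pod
// remains running during the termination grace period until the hook ends.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	grace := int64(30)

	container := corev1.Container{
		Name:    "prestop",
		Image:   "busybox:1.29",
		Command: []string{"/bin/sh", "-c", "sleep 600"},
	}
	// The kubelet runs this hook on deletion and waits for it (bounded by
	// terminationGracePeriodSeconds) before sending SIGTERM to the container.
	container.Lifecycle = &corev1.Lifecycle{
		PreStop: &corev1.LifecycleHandler{
			Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 20"}},
		},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-graceful"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers:                    []corev1.Container{container},
		},
	}
	fmt.Printf("grace period %ds, preStop command %v\n",
		*pod.Spec.TerminationGracePeriodSeconds,
		pod.Spec.Containers[0].Lifecycle.PreStop.Exec.Command)
}
------------------------------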
• [SLOW TEST:26.095 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":13,"skipped":1298,"failed":0} Nov 13 04:00:04.833: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:59:17.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e in namespace container-probe-1951 Nov 13 03:59:21.983: INFO: Started pod busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e in namespace container-probe-1951 Nov 13 03:59:21.983: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (896ns elapsed) Nov 13 03:59:23.983: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (2.000151904s elapsed) Nov 13 03:59:25.985: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (4.001368364s elapsed) Nov 13 03:59:27.986: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (6.002715598s elapsed) Nov 13 03:59:29.988: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (8.004337458s elapsed) Nov 13 03:59:31.989: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (10.005264987s elapsed) Nov 13 03:59:33.991: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (12.008205902s elapsed) Nov 13 03:59:35.992: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (14.008337309s elapsed) Nov 13 03:59:37.993: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (16.009562872s elapsed) Nov 13 03:59:39.994: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (18.011110159s elapsed) Nov 13 03:59:41.996: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (20.012254456s elapsed) Nov 13 03:59:43.997: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (22.01323853s elapsed) Nov 13 03:59:45.997: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (24.013618009s elapsed) Nov 13 03:59:47.998: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (26.01512502s elapsed) Nov 13 03:59:50.000: INFO: pod 
container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (28.016728515s elapsed) Nov 13 03:59:52.001: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (30.017535594s elapsed) Nov 13 03:59:54.002: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (32.019023968s elapsed) Nov 13 03:59:56.004: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (34.020384139s elapsed) Nov 13 03:59:58.004: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (36.020883125s elapsed) Nov 13 04:00:00.006: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (38.022338846s elapsed) Nov 13 04:00:02.006: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (40.023166422s elapsed) Nov 13 04:00:04.009: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (42.025899595s elapsed) Nov 13 04:00:06.011: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (44.027382941s elapsed) Nov 13 04:00:08.016: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (46.032582382s elapsed) Nov 13 04:00:10.019: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (48.036101956s elapsed) Nov 13 04:00:12.020: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (50.036468244s elapsed) Nov 13 04:00:14.022: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (52.0392055s elapsed) Nov 13 04:00:16.024: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (54.040550292s elapsed) Nov 13 04:00:18.027: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (56.043681162s elapsed) Nov 13 04:00:20.033: INFO: pod container-probe-1951/busybox-0b78a2e1-8da2-4668-87fc-9da5c8fe982e is not ready (58.050048423s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:00:22.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1951" for this suite. 
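------------------------------
The container-probe-1951 spec above is the readiness-side counterpart of the earlier liveness-timeout case: the exec readiness probe exceeds its timeoutSeconds on every attempt, so the pod stays NotReady for the whole observation window. A hedged sketch; names, image and timings are illustrative assumptions.

// Hedged sketch: exec readiness probe that always times out, so the pod is
// expected to remain NotReady (no restart, since it is not a liveness probe).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readiness := &corev1.Probe{
		TimeoutSeconds: 1, // the handler below takes ~10s, so every attempt times out
		PeriodSeconds:  5,
	}
	readiness.Exec = &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 10"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readiness-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "busybox",
				Image:          "busybox:1.29",
				Command:        []string{"/bin/sh", "-c", "sleep 600"},
				ReadinessProbe: readiness,
			}},
		},
	}
	fmt.Println("pod", pod.Name, "is expected to stay NotReady")
}
------------------------------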
• [SLOW TEST:64.106 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":8,"skipped":1205,"failed":0} Nov 13 04:00:22.052: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods W1113 03:57:03.499301 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:03.499: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:03.501: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 Nov 13 03:57:03.519: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:05.522: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:07.523: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:09.523: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:11.523: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:13.523: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:15.524: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:57:17.523: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Nov 13 03:58:19.535: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-11-13 03:57:51 +0000 UTC restartedAt=2021-11-13 03:58:18 +0000 UTC (27s) STEP: getting restart delay-1 Nov 13 03:59:06.720: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-11-13 03:58:23 +0000 UTC restartedAt=2021-11-13 03:59:05 +0000 UTC (42s) STEP: getting restart delay-2 Nov 13 04:00:36.082: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-11-13 03:59:10 +0000 UTC restartedAt=2021-11-13 04:00:34 +0000 UTC (1m24s) STEP: updating the image Nov 13 04:00:36.594: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Nov 13 04:01:02.660: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-11-13 04:00:45 +0000 UTC 
restartedAt=2021-11-13 04:01:01 +0000 UTC (16s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:01:02.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8072" for this suite. • [SLOW TEST:239.193 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":1,"skipped":82,"failed":0} Nov 13 04:01:02.671: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:03.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1113 03:57:03.225809 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:57:03.226: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:57:03.229: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-76463057-0955-4681-bccf-1ac4895c21b8 in namespace container-probe-2133 Nov 13 03:57:15.252: INFO: Started pod startup-76463057-0955-4681-bccf-1ac4895c21b8 in namespace container-probe-2133 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 03:57:15.255: INFO: Initial restart count of pod startup-76463057-0955-4681-bccf-1ac4895c21b8 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:01:15.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2133" for this suite. 
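------------------------------
The container-probe-2133 spec above verifies that a startup probe holds the liveness probe off while the container is still starting, so restartCount stays at 0. A hedged sketch of that arrangement; the files, timings and image are illustrative assumptions rather than the exact fixture.

// Hedged sketch: startupProbe gates the livenessProbe; liveness checking only
// begins once the startup probe has succeeded, so a slow start is tolerated.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	liveness := &corev1.Probe{PeriodSeconds: 10, FailureThreshold: 1}
	liveness.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}}

	// Allow up to failureThreshold x periodSeconds = 600s of startup time
	// before the liveness probe takes over.
	startup := &corev1.Probe{PeriodSeconds: 10, FailureThreshold: 60}
	startup.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-delays-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29",
				// Simulate a slow start: the probed files only appear after a delay.
				Command:       []string{"/bin/sh", "-c", "sleep 45; touch /tmp/started /tmp/healthy; sleep 600"},
				LivenessProbe: liveness,
				StartupProbe:  startup,
			}},
		},
	}
	fmt.Println("pod", pod.Name, "restartCount should stay 0 while starting")
}
------------------------------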
• [SLOW TEST:252.579 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":1,"skipped":6,"failed":0} Nov 13 04:01:15.778: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:57:35.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Nov 13 03:57:35.072: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Nov 13 03:57:36.083: INFO: node status heartbeat is unchanged for 1.00339249s, waiting for 1m20s Nov 13 03:57:37.085: INFO: node status heartbeat is unchanged for 2.005426324s, waiting for 1m20s Nov 13 03:57:38.084: INFO: node status heartbeat is unchanged for 3.004157129s, waiting for 1m20s Nov 13 03:57:39.085: INFO: node status heartbeat is unchanged for 4.005762106s, waiting for 1m20s Nov 13 03:57:40.086: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:57:40.091: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:29 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:39 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:29 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:39 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: 
"False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:29 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:39 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Nov 13 03:57:41.084: INFO: node status heartbeat is unchanged for 998.58217ms, waiting for 1m20s Nov 13 03:57:42.083: INFO: node status heartbeat is unchanged for 1.99764425s, waiting for 1m20s Nov 13 03:57:43.084: INFO: node status heartbeat is unchanged for 2.998117245s, waiting for 1m20s Nov 13 03:57:44.083: INFO: node status heartbeat is unchanged for 3.997803674s, waiting for 1m20s Nov 13 03:57:45.084: INFO: node status heartbeat is unchanged for 4.998428664s, waiting for 1m20s Nov 13 03:57:46.084: INFO: node status heartbeat is unchanged for 5.998019421s, waiting for 1m20s Nov 13 03:57:47.086: INFO: node status heartbeat is unchanged for 7.00059011s, waiting for 1m20s Nov 13 03:57:48.084: INFO: node status heartbeat is unchanged for 7.99838132s, waiting for 1m20s Nov 13 03:57:49.086: INFO: node status heartbeat is unchanged for 9.000890169s, waiting for 1m20s Nov 13 03:57:50.087: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:57:50.092: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:39 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:49 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:39 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:49 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:39 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:49 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: 
"10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Nov 13 03:57:51.083: INFO: node status heartbeat is unchanged for 995.698125ms, waiting for 1m20s Nov 13 03:57:52.084: INFO: node status heartbeat is unchanged for 1.996495956s, waiting for 1m20s Nov 13 03:57:53.084: INFO: node status heartbeat is unchanged for 2.996959644s, waiting for 1m20s Nov 13 03:57:54.086: INFO: node status heartbeat is unchanged for 3.99937314s, waiting for 1m20s Nov 13 03:57:55.084: INFO: node status heartbeat is unchanged for 4.997402637s, waiting for 1m20s Nov 13 03:57:56.084: INFO: node status heartbeat is unchanged for 5.997123161s, waiting for 1m20s Nov 13 03:57:57.084: INFO: node status heartbeat is unchanged for 6.997116212s, waiting for 1m20s Nov 13 03:57:58.084: INFO: node status heartbeat is unchanged for 7.996548083s, waiting for 1m20s Nov 13 03:57:59.084: INFO: node status heartbeat is unchanged for 8.997123078s, waiting for 1m20s Nov 13 03:58:00.084: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:58:00.089: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:49 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:59 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:49 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:59 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:49 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:59 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:58:01.086: INFO: node status heartbeat is unchanged for 1.001944656s, waiting for 1m20s Nov 13 03:58:02.084: INFO: node status heartbeat is unchanged for 1.999775224s, waiting for 1m20s Nov 13 03:58:03.085: INFO: node status heartbeat is unchanged for 3.001499699s, waiting for 1m20s Nov 13 03:58:04.084: INFO: node status heartbeat is unchanged for 4.000253041s, waiting for 1m20s Nov 13 03:58:05.085: INFO: node status heartbeat is unchanged for 5.001136171s, waiting for 1m20s Nov 13 03:58:06.083: INFO: node status heartbeat is unchanged for 5.999200079s, waiting for 1m20s Nov 13 03:58:07.085: INFO: node status heartbeat is unchanged for 7.001386089s, waiting for 1m20s Nov 13 03:58:08.084: INFO: node status heartbeat is unchanged for 7.999822094s, waiting for 1m20s Nov 13 03:58:09.086: INFO: node status heartbeat is unchanged for 9.002098377s, waiting for 1m20s Nov 13 03:58:10.086: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:58:10.090: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:59 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:09 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:59 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:09 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:57:59 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:09 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:58:11.084: INFO: node status heartbeat is unchanged for 998.681283ms, waiting for 1m20s Nov 13 03:58:12.084: INFO: node status heartbeat is unchanged for 1.998259315s, waiting for 1m20s Nov 13 03:58:13.083: INFO: node status heartbeat is unchanged for 2.998004756s, waiting for 1m20s Nov 13 03:58:14.085: INFO: node status heartbeat is unchanged for 3.999452058s, waiting for 1m20s Nov 13 03:58:15.085: INFO: node status heartbeat is unchanged for 4.99975118s, waiting for 1m20s Nov 13 03:58:16.084: INFO: node status heartbeat is unchanged for 5.998487353s, waiting for 1m20s Nov 13 03:58:17.085: INFO: node status heartbeat is unchanged for 6.999410193s, waiting for 1m20s Nov 13 03:58:18.084: INFO: node status heartbeat is unchanged for 7.998395008s, waiting for 1m20s Nov 13 03:58:19.085: INFO: node status heartbeat is unchanged for 8.999619855s, waiting for 1m20s Nov 13 03:58:20.085: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:58:20.089: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:09 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:19 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:09 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:19 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:09 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:19 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:58:21.087: INFO: node status heartbeat is unchanged for 1.002107974s, waiting for 1m20s Nov 13 03:58:22.086: INFO: node status heartbeat is unchanged for 2.001463913s, waiting for 1m20s Nov 13 03:58:23.084: INFO: node status heartbeat is unchanged for 2.999364462s, waiting for 1m20s Nov 13 03:58:24.084: INFO: node status heartbeat is unchanged for 3.999832633s, waiting for 1m20s Nov 13 03:58:25.084: INFO: node status heartbeat is unchanged for 4.999944954s, waiting for 1m20s Nov 13 03:58:26.084: INFO: node status heartbeat is unchanged for 5.999118888s, waiting for 1m20s Nov 13 03:58:27.086: INFO: node status heartbeat is unchanged for 7.001104952s, waiting for 1m20s Nov 13 03:58:28.084: INFO: node status heartbeat is unchanged for 7.999889038s, waiting for 1m20s Nov 13 03:58:29.085: INFO: node status heartbeat is unchanged for 9.000355285s, waiting for 1m20s Nov 13 03:58:30.084: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:58:30.089: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:19 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:29 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:19 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:29 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:19 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:29 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:58:31.086: INFO: node status heartbeat is unchanged for 1.00175043s, waiting for 1m20s Nov 13 03:58:32.086: INFO: node status heartbeat is unchanged for 2.001750206s, waiting for 1m20s Nov 13 03:58:33.085: INFO: node status heartbeat is unchanged for 3.000808264s, waiting for 1m20s Nov 13 03:58:34.084: INFO: node status heartbeat is unchanged for 3.999539504s, waiting for 1m20s Nov 13 03:58:35.084: INFO: node status heartbeat is unchanged for 4.999843342s, waiting for 1m20s Nov 13 03:58:36.084: INFO: node status heartbeat is unchanged for 6.000147994s, waiting for 1m20s Nov 13 03:58:37.083: INFO: node status heartbeat is unchanged for 6.999217478s, waiting for 1m20s Nov 13 03:58:38.084: INFO: node status heartbeat is unchanged for 7.999644631s, waiting for 1m20s Nov 13 03:58:39.083: INFO: node status heartbeat is unchanged for 8.998610389s, waiting for 1m20s Nov 13 03:58:40.086: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:58:40.091: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:29 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:39 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:29 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:39 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:29 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:39 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:58:41.084: INFO: node status heartbeat is unchanged for 997.389708ms, waiting for 1m20s Nov 13 03:58:42.085: INFO: node status heartbeat is unchanged for 1.999000996s, waiting for 1m20s Nov 13 03:58:43.084: INFO: node status heartbeat is unchanged for 2.997950167s, waiting for 1m20s Nov 13 03:58:44.085: INFO: node status heartbeat is unchanged for 3.998628322s, waiting for 1m20s Nov 13 03:58:45.085: INFO: node status heartbeat is unchanged for 4.999004267s, waiting for 1m20s Nov 13 03:58:46.085: INFO: node status heartbeat is unchanged for 5.999052312s, waiting for 1m20s Nov 13 03:58:47.085: INFO: node status heartbeat is unchanged for 6.998700729s, waiting for 1m20s Nov 13 03:58:48.085: INFO: node status heartbeat is unchanged for 7.998492819s, waiting for 1m20s Nov 13 03:58:49.086: INFO: node status heartbeat is unchanged for 8.999507691s, waiting for 1m20s Nov 13 03:58:50.084: INFO: node status heartbeat is unchanged for 9.998001381s, waiting for 1m20s Nov 13 03:58:51.085: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Nov 13 03:58:51.089: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:39 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:50 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:39 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:50 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:39 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:50 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:58:52.085: INFO: node status heartbeat is unchanged for 1.000696946s, waiting for 1m20s Nov 13 03:58:53.083: INFO: node status heartbeat is unchanged for 1.998940057s, waiting for 1m20s Nov 13 03:58:54.086: INFO: node status heartbeat is unchanged for 3.001046015s, waiting for 1m20s Nov 13 03:58:55.085: INFO: node status heartbeat is unchanged for 4.000506942s, waiting for 1m20s Nov 13 03:58:56.085: INFO: node status heartbeat is unchanged for 5.000986609s, waiting for 1m20s Nov 13 03:58:57.085: INFO: node status heartbeat is unchanged for 6.000285688s, waiting for 1m20s Nov 13 03:58:58.084: INFO: node status heartbeat is unchanged for 6.999097131s, waiting for 1m20s Nov 13 03:58:59.083: INFO: node status heartbeat is unchanged for 7.998751631s, waiting for 1m20s Nov 13 03:59:00.085: INFO: node status heartbeat is unchanged for 9.000229261s, waiting for 1m20s Nov 13 03:59:01.085: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:59:01.090: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:50 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:00 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:50 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:00 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:58:50 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:00 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:59:02.083: INFO: node status heartbeat is unchanged for 997.619642ms, waiting for 1m20s Nov 13 03:59:03.084: INFO: node status heartbeat is unchanged for 1.998932193s, waiting for 1m20s Nov 13 03:59:04.084: INFO: node status heartbeat is unchanged for 2.99844473s, waiting for 1m20s Nov 13 03:59:05.084: INFO: node status heartbeat is unchanged for 3.998315262s, waiting for 1m20s Nov 13 03:59:06.083: INFO: node status heartbeat is unchanged for 4.998107853s, waiting for 1m20s Nov 13 03:59:07.084: INFO: node status heartbeat is unchanged for 5.998943165s, waiting for 1m20s Nov 13 03:59:08.084: INFO: node status heartbeat is unchanged for 6.998851032s, waiting for 1m20s Nov 13 03:59:09.086: INFO: node status heartbeat is unchanged for 8.000624656s, waiting for 1m20s Nov 13 03:59:10.085: INFO: node status heartbeat is unchanged for 8.999563442s, waiting for 1m20s Nov 13 03:59:11.085: INFO: node status heartbeat is unchanged for 10.000091276s, waiting for 1m20s Nov 13 03:59:12.084: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Nov 13 03:59:12.089: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:00 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:11 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:00 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:11 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:00 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:11 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:59:13.085: INFO: node status heartbeat is unchanged for 1.000870125s, waiting for 1m20s Nov 13 03:59:14.084: INFO: node status heartbeat is unchanged for 2.000231328s, waiting for 1m20s Nov 13 03:59:15.083: INFO: node status heartbeat is unchanged for 2.999333313s, waiting for 1m20s Nov 13 03:59:16.085: INFO: node status heartbeat is unchanged for 4.000827501s, waiting for 1m20s Nov 13 03:59:17.086: INFO: node status heartbeat is unchanged for 5.001996008s, waiting for 1m20s Nov 13 03:59:18.083: INFO: node status heartbeat is unchanged for 5.999522804s, waiting for 1m20s Nov 13 03:59:19.083: INFO: node status heartbeat is unchanged for 6.999278108s, waiting for 1m20s Nov 13 03:59:20.083: INFO: node status heartbeat is unchanged for 7.999190195s, waiting for 1m20s Nov 13 03:59:21.086: INFO: node status heartbeat is unchanged for 9.002444462s, waiting for 1m20s Nov 13 03:59:22.085: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:59:22.090: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:21 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:21 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:21 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:59:23.084: INFO: node status heartbeat is unchanged for 999.226166ms, waiting for 1m20s Nov 13 03:59:24.085: INFO: node status heartbeat is unchanged for 1.999897774s, waiting for 1m20s Nov 13 03:59:25.084: INFO: node status heartbeat is unchanged for 2.999132379s, waiting for 1m20s Nov 13 03:59:26.084: INFO: node status heartbeat is unchanged for 3.999501453s, waiting for 1m20s Nov 13 03:59:27.083: INFO: node status heartbeat is unchanged for 4.99852552s, waiting for 1m20s Nov 13 03:59:28.084: INFO: node status heartbeat is unchanged for 5.999357838s, waiting for 1m20s Nov 13 03:59:29.087: INFO: node status heartbeat is unchanged for 7.002471979s, waiting for 1m20s Nov 13 03:59:30.083: INFO: node status heartbeat is unchanged for 7.998571068s, waiting for 1m20s Nov 13 03:59:31.083: INFO: node status heartbeat is unchanged for 8.998202358s, waiting for 1m20s Nov 13 03:59:32.084: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:59:32.089: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:31 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:31 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:31 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:59:33.083: INFO: node status heartbeat is unchanged for 998.77824ms, waiting for 1m20s Nov 13 03:59:34.084: INFO: node status heartbeat is unchanged for 1.999667632s, waiting for 1m20s Nov 13 03:59:35.083: INFO: node status heartbeat is unchanged for 2.998979205s, waiting for 1m20s Nov 13 03:59:36.085: INFO: node status heartbeat is unchanged for 4.000953697s, waiting for 1m20s Nov 13 03:59:37.084: INFO: node status heartbeat is unchanged for 5.00050487s, waiting for 1m20s Nov 13 03:59:38.084: INFO: node status heartbeat is unchanged for 5.999874811s, waiting for 1m20s Nov 13 03:59:39.084: INFO: node status heartbeat is unchanged for 6.999965608s, waiting for 1m20s Nov 13 03:59:40.083: INFO: node status heartbeat is unchanged for 7.999604814s, waiting for 1m20s Nov 13 03:59:41.083: INFO: node status heartbeat is unchanged for 8.9995707s, waiting for 1m20s Nov 13 03:59:42.087: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:59:42.092: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:41 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:41 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:41 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:59:43.083: INFO: node status heartbeat is unchanged for 995.648336ms, waiting for 1m20s Nov 13 03:59:44.084: INFO: node status heartbeat is unchanged for 1.997124983s, waiting for 1m20s Nov 13 03:59:45.084: INFO: node status heartbeat is unchanged for 2.997307254s, waiting for 1m20s Nov 13 03:59:46.083: INFO: node status heartbeat is unchanged for 3.996277986s, waiting for 1m20s Nov 13 03:59:47.085: INFO: node status heartbeat is unchanged for 4.997364921s, waiting for 1m20s Nov 13 03:59:48.086: INFO: node status heartbeat is unchanged for 5.99846169s, waiting for 1m20s Nov 13 03:59:49.086: INFO: node status heartbeat is unchanged for 6.998600095s, waiting for 1m20s Nov 13 03:59:50.085: INFO: node status heartbeat is unchanged for 7.998245899s, waiting for 1m20s Nov 13 03:59:51.087: INFO: node status heartbeat is unchanged for 8.999574984s, waiting for 1m20s Nov 13 03:59:52.084: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 03:59:52.089: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:51 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:51 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:51 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 03:59:53.083: INFO: node status heartbeat is unchanged for 999.637199ms, waiting for 1m20s Nov 13 03:59:54.087: INFO: node status heartbeat is unchanged for 2.00345069s, waiting for 1m20s Nov 13 03:59:55.088: INFO: node status heartbeat is unchanged for 3.004332509s, waiting for 1m20s Nov 13 03:59:56.085: INFO: node status heartbeat is unchanged for 4.001321395s, waiting for 1m20s Nov 13 03:59:57.084: INFO: node status heartbeat is unchanged for 5.000311569s, waiting for 1m20s Nov 13 03:59:58.085: INFO: node status heartbeat is unchanged for 6.001142492s, waiting for 1m20s Nov 13 03:59:59.085: INFO: node status heartbeat is unchanged for 7.000665194s, waiting for 1m20s Nov 13 04:00:00.084: INFO: node status heartbeat is unchanged for 7.999928229s, waiting for 1m20s Nov 13 04:00:01.085: INFO: node status heartbeat is unchanged for 9.001108359s, waiting for 1m20s Nov 13 04:00:02.084: INFO: node status heartbeat is unchanged for 10.000503717s, waiting for 1m20s Nov 13 04:00:03.084: INFO: node status heartbeat is unchanged for 11.000580591s, waiting for 1m20s Nov 13 04:00:04.086: INFO: node status heartbeat changed in 12s (with other status changes), waiting for 40s Nov 13 04:00:04.091: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:03 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:03 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 03:59:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:03 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:00:05.083: INFO: node status heartbeat is unchanged for 997.017047ms, waiting for 1m20s Nov 13 04:00:06.083: INFO: node status heartbeat is unchanged for 1.996530759s, waiting for 1m20s Nov 13 04:00:07.084: INFO: node status heartbeat is unchanged for 2.997776844s, waiting for 1m20s Nov 13 04:00:08.084: INFO: node status heartbeat is unchanged for 3.99819275s, waiting for 1m20s Nov 13 04:00:09.085: INFO: node status heartbeat is unchanged for 4.99916524s, waiting for 1m20s Nov 13 04:00:10.084: INFO: node status heartbeat is unchanged for 5.997978255s, waiting for 1m20s Nov 13 04:00:11.084: INFO: node status heartbeat is unchanged for 6.997385672s, waiting for 1m20s Nov 13 04:00:12.084: INFO: node status heartbeat is unchanged for 7.99725337s, waiting for 1m20s Nov 13 04:00:13.083: INFO: node status heartbeat is unchanged for 8.996362983s, waiting for 1m20s Nov 13 04:00:14.085: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:00:14.090: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:13 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:13 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:13 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:00:15.084: INFO: node status heartbeat is unchanged for 998.654177ms, waiting for 1m20s Nov 13 04:00:16.084: INFO: node status heartbeat is unchanged for 1.998949365s, waiting for 1m20s Nov 13 04:00:17.084: INFO: node status heartbeat is unchanged for 2.999141855s, waiting for 1m20s Nov 13 04:00:18.084: INFO: node status heartbeat is unchanged for 3.999025941s, waiting for 1m20s Nov 13 04:00:19.083: INFO: node status heartbeat is unchanged for 4.99828048s, waiting for 1m20s Nov 13 04:00:20.085: INFO: node status heartbeat is unchanged for 5.999512023s, waiting for 1m20s Nov 13 04:00:21.084: INFO: node status heartbeat is unchanged for 6.998653204s, waiting for 1m20s Nov 13 04:00:22.083: INFO: node status heartbeat is unchanged for 7.99738007s, waiting for 1m20s Nov 13 04:00:23.084: INFO: node status heartbeat is unchanged for 8.999357869s, waiting for 1m20s Nov 13 04:00:24.083: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:00:24.088: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:23 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:23 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:23 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:00:25.084: INFO: node status heartbeat is unchanged for 1.000595479s, waiting for 1m20s Nov 13 04:00:26.084: INFO: node status heartbeat is unchanged for 2.00106148s, waiting for 1m20s Nov 13 04:00:27.083: INFO: node status heartbeat is unchanged for 2.999514828s, waiting for 1m20s Nov 13 04:00:28.084: INFO: node status heartbeat is unchanged for 4.000948375s, waiting for 1m20s Nov 13 04:00:29.085: INFO: node status heartbeat is unchanged for 5.001524754s, waiting for 1m20s Nov 13 04:00:30.085: INFO: node status heartbeat is unchanged for 6.002016864s, waiting for 1m20s Nov 13 04:00:31.084: INFO: node status heartbeat is unchanged for 7.00060638s, waiting for 1m20s Nov 13 04:00:32.084: INFO: node status heartbeat is unchanged for 8.001062227s, waiting for 1m20s Nov 13 04:00:33.084: INFO: node status heartbeat is unchanged for 9.000932831s, waiting for 1m20s Nov 13 04:00:34.083: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:00:34.088: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:00:35.084: INFO: node status heartbeat is unchanged for 1.000779657s, waiting for 1m20s Nov 13 04:00:36.084: INFO: node status heartbeat is unchanged for 2.001381865s, waiting for 1m20s Nov 13 04:00:37.085: INFO: node status heartbeat is unchanged for 3.002229415s, waiting for 1m20s Nov 13 04:00:38.085: INFO: node status heartbeat is unchanged for 4.001925081s, waiting for 1m20s Nov 13 04:00:39.087: INFO: node status heartbeat is unchanged for 5.004187999s, waiting for 1m20s Nov 13 04:00:40.086: INFO: node status heartbeat is unchanged for 6.003437243s, waiting for 1m20s Nov 13 04:00:41.086: INFO: node status heartbeat is unchanged for 7.003235776s, waiting for 1m20s Nov 13 04:00:42.085: INFO: node status heartbeat is unchanged for 8.001855135s, waiting for 1m20s Nov 13 04:00:43.085: INFO: node status heartbeat is unchanged for 9.001613958s, waiting for 1m20s Nov 13 04:00:44.083: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:00:44.088: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:43 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:43 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:43 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:00:45.086: INFO: node status heartbeat is unchanged for 1.002701182s, waiting for 1m20s Nov 13 04:00:46.085: INFO: node status heartbeat is unchanged for 2.002014776s, waiting for 1m20s Nov 13 04:00:47.086: INFO: node status heartbeat is unchanged for 3.002463157s, waiting for 1m20s Nov 13 04:00:48.084: INFO: node status heartbeat is unchanged for 4.001163457s, waiting for 1m20s Nov 13 04:00:49.086: INFO: node status heartbeat is unchanged for 5.003048762s, waiting for 1m20s Nov 13 04:00:50.085: INFO: node status heartbeat is unchanged for 6.001602985s, waiting for 1m20s Nov 13 04:00:51.084: INFO: node status heartbeat is unchanged for 7.001000737s, waiting for 1m20s Nov 13 04:00:52.085: INFO: node status heartbeat is unchanged for 8.001403676s, waiting for 1m20s Nov 13 04:00:53.083: INFO: node status heartbeat is unchanged for 9.000114062s, waiting for 1m20s Nov 13 04:00:54.087: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:00:54.092: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:53 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:53 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:53 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:00:55.086: INFO: node status heartbeat is unchanged for 998.783555ms, waiting for 1m20s Nov 13 04:00:56.087: INFO: node status heartbeat is unchanged for 2.000296999s, waiting for 1m20s Nov 13 04:00:57.085: INFO: node status heartbeat is unchanged for 2.997402253s, waiting for 1m20s Nov 13 04:00:58.086: INFO: node status heartbeat is unchanged for 3.998996658s, waiting for 1m20s Nov 13 04:00:59.087: INFO: node status heartbeat is unchanged for 4.999423342s, waiting for 1m20s Nov 13 04:01:00.088: INFO: node status heartbeat is unchanged for 6.001217709s, waiting for 1m20s Nov 13 04:01:01.087: INFO: node status heartbeat is unchanged for 6.999816788s, waiting for 1m20s Nov 13 04:01:02.085: INFO: node status heartbeat is unchanged for 7.997953473s, waiting for 1m20s Nov 13 04:01:03.084: INFO: node status heartbeat is unchanged for 8.996774612s, waiting for 1m20s Nov 13 04:01:04.083: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:01:04.088: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:03 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:03 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:00:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:03 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:01:05.086: INFO: node status heartbeat is unchanged for 1.003103543s, waiting for 1m20s Nov 13 04:01:06.084: INFO: node status heartbeat is unchanged for 2.000735481s, waiting for 1m20s Nov 13 04:01:07.086: INFO: node status heartbeat is unchanged for 3.002477939s, waiting for 1m20s Nov 13 04:01:08.085: INFO: node status heartbeat is unchanged for 4.001636431s, waiting for 1m20s Nov 13 04:01:09.087: INFO: node status heartbeat is unchanged for 5.004139001s, waiting for 1m20s Nov 13 04:01:10.086: INFO: node status heartbeat is unchanged for 6.002268578s, waiting for 1m20s Nov 13 04:01:11.084: INFO: node status heartbeat is unchanged for 7.000458228s, waiting for 1m20s Nov 13 04:01:12.085: INFO: node status heartbeat is unchanged for 8.001208628s, waiting for 1m20s Nov 13 04:01:13.083: INFO: node status heartbeat is unchanged for 9.000143581s, waiting for 1m20s Nov 13 04:01:14.086: INFO: node status heartbeat is unchanged for 10.002223182s, waiting for 1m20s Nov 13 04:01:15.086: INFO: node status heartbeat is unchanged for 11.003114304s, waiting for 1m20s Nov 13 04:01:16.085: INFO: node status heartbeat changed in 12s (with other status changes), waiting for 40s Nov 13 04:01:16.090: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:15 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:15 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:15 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:01:17.086: INFO: node status heartbeat is unchanged for 1.000439854s, waiting for 1m20s Nov 13 04:01:18.085: INFO: node status heartbeat is unchanged for 1.999536094s, waiting for 1m20s Nov 13 04:01:19.086: INFO: node status heartbeat is unchanged for 3.000933956s, waiting for 1m20s Nov 13 04:01:20.085: INFO: node status heartbeat is unchanged for 3.999837457s, waiting for 1m20s Nov 13 04:01:21.085: INFO: node status heartbeat is unchanged for 4.999390368s, waiting for 1m20s Nov 13 04:01:22.085: INFO: node status heartbeat is unchanged for 5.999280784s, waiting for 1m20s Nov 13 04:01:23.083: INFO: node status heartbeat is unchanged for 6.997830995s, waiting for 1m20s Nov 13 04:01:24.084: INFO: node status heartbeat is unchanged for 7.998881628s, waiting for 1m20s Nov 13 04:01:25.085: INFO: node status heartbeat is unchanged for 8.999483159s, waiting for 1m20s Nov 13 04:01:26.087: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:01:26.092: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:25 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:25 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:25 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:01:27.086: INFO: node status heartbeat is unchanged for 999.207643ms, waiting for 1m20s Nov 13 04:01:28.084: INFO: node status heartbeat is unchanged for 1.997053028s, waiting for 1m20s Nov 13 04:01:29.086: INFO: node status heartbeat is unchanged for 2.999649921s, waiting for 1m20s Nov 13 04:01:30.085: INFO: node status heartbeat is unchanged for 3.998014381s, waiting for 1m20s Nov 13 04:01:31.085: INFO: node status heartbeat is unchanged for 4.998012874s, waiting for 1m20s Nov 13 04:01:32.084: INFO: node status heartbeat is unchanged for 5.997438151s, waiting for 1m20s Nov 13 04:01:33.084: INFO: node status heartbeat is unchanged for 6.997453699s, waiting for 1m20s Nov 13 04:01:34.083: INFO: node status heartbeat is unchanged for 7.996616265s, waiting for 1m20s Nov 13 04:01:35.112: INFO: node status heartbeat is unchanged for 9.025089885s, waiting for 1m20s Nov 13 04:01:36.085: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:01:36.090: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:25 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:35 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:25 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:35 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:25 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:35 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:01:37.085: INFO: node status heartbeat is unchanged for 999.924317ms, waiting for 1m20s Nov 13 04:01:38.083: INFO: node status heartbeat is unchanged for 1.998442827s, waiting for 1m20s Nov 13 04:01:39.084: INFO: node status heartbeat is unchanged for 2.998668313s, waiting for 1m20s Nov 13 04:01:40.083: INFO: node status heartbeat is unchanged for 3.998596475s, waiting for 1m20s Nov 13 04:01:41.083: INFO: node status heartbeat is unchanged for 4.998407419s, waiting for 1m20s Nov 13 04:01:42.085: INFO: node status heartbeat is unchanged for 6.000363564s, waiting for 1m20s Nov 13 04:01:43.084: INFO: node status heartbeat is unchanged for 6.999036056s, waiting for 1m20s Nov 13 04:01:44.086: INFO: node status heartbeat is unchanged for 8.001423584s, waiting for 1m20s Nov 13 04:01:45.086: INFO: node status heartbeat is unchanged for 9.001568705s, waiting for 1m20s Nov 13 04:01:46.085: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:01:46.090: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:45 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:45 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:45 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:01:47.086: INFO: node status heartbeat is unchanged for 1.000399711s, waiting for 1m20s Nov 13 04:01:48.084: INFO: node status heartbeat is unchanged for 1.999155483s, waiting for 1m20s Nov 13 04:01:49.086: INFO: node status heartbeat is unchanged for 3.000760855s, waiting for 1m20s Nov 13 04:01:50.085: INFO: node status heartbeat is unchanged for 3.999446228s, waiting for 1m20s Nov 13 04:01:51.085: INFO: node status heartbeat is unchanged for 4.999440986s, waiting for 1m20s Nov 13 04:01:52.084: INFO: node status heartbeat is unchanged for 5.999318693s, waiting for 1m20s Nov 13 04:01:53.084: INFO: node status heartbeat is unchanged for 6.999002251s, waiting for 1m20s Nov 13 04:01:54.084: INFO: node status heartbeat is unchanged for 7.999059461s, waiting for 1m20s Nov 13 04:01:55.085: INFO: node status heartbeat is unchanged for 8.999519839s, waiting for 1m20s Nov 13 04:01:56.086: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:01:56.091: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:55 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:55 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:55 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:01:57.084: INFO: node status heartbeat is unchanged for 997.761949ms, waiting for 1m20s Nov 13 04:01:58.086: INFO: node status heartbeat is unchanged for 2.000311463s, waiting for 1m20s Nov 13 04:01:59.085: INFO: node status heartbeat is unchanged for 2.998983944s, waiting for 1m20s Nov 13 04:02:00.084: INFO: node status heartbeat is unchanged for 3.998516143s, waiting for 1m20s Nov 13 04:02:01.084: INFO: node status heartbeat is unchanged for 4.998408733s, waiting for 1m20s Nov 13 04:02:02.084: INFO: node status heartbeat is unchanged for 5.998703419s, waiting for 1m20s Nov 13 04:02:03.085: INFO: node status heartbeat is unchanged for 6.999053435s, waiting for 1m20s Nov 13 04:02:04.085: INFO: node status heartbeat is unchanged for 7.999422588s, waiting for 1m20s Nov 13 04:02:05.084: INFO: node status heartbeat is unchanged for 8.998508613s, waiting for 1m20s Nov 13 04:02:06.084: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:02:06.088: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:05 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:05 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:01:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:05 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:02:07.083: INFO: node status heartbeat is unchanged for 999.128822ms, waiting for 1m20s Nov 13 04:02:08.084: INFO: node status heartbeat is unchanged for 1.999821494s, waiting for 1m20s Nov 13 04:02:09.084: INFO: node status heartbeat is unchanged for 3.000430479s, waiting for 1m20s Nov 13 04:02:10.085: INFO: node status heartbeat is unchanged for 4.000809006s, waiting for 1m20s Nov 13 04:02:11.083: INFO: node status heartbeat is unchanged for 4.999575558s, waiting for 1m20s Nov 13 04:02:12.085: INFO: node status heartbeat is unchanged for 6.000994985s, waiting for 1m20s Nov 13 04:02:13.085: INFO: node status heartbeat is unchanged for 7.001289159s, waiting for 1m20s Nov 13 04:02:14.085: INFO: node status heartbeat is unchanged for 8.001079743s, waiting for 1m20s Nov 13 04:02:15.085: INFO: node status heartbeat is unchanged for 9.000711598s, waiting for 1m20s Nov 13 04:02:16.084: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:02:16.088: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:05 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:15 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:05 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:15 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:05 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:15 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 13 04:02:17.085: INFO: node status heartbeat is unchanged for 1.001195006s, waiting for 1m20s Nov 13 04:02:18.084: INFO: node status heartbeat is unchanged for 2.000304418s, waiting for 1m20s Nov 13 04:02:19.086: INFO: node status heartbeat is unchanged for 3.002690044s, waiting for 1m20s Nov 13 04:02:20.087: INFO: node status heartbeat is unchanged for 4.003230998s, waiting for 1m20s Nov 13 04:02:21.086: INFO: node status heartbeat is unchanged for 5.002967304s, waiting for 1m20s Nov 13 04:02:22.085: INFO: node status heartbeat is unchanged for 6.001496949s, waiting for 1m20s Nov 13 04:02:23.085: INFO: node status heartbeat is unchanged for 7.001332316s, waiting for 1m20s Nov 13 04:02:24.086: INFO: node status heartbeat is unchanged for 8.002946866s, waiting for 1m20s Nov 13 04:02:25.085: INFO: node status heartbeat is unchanged for 9.001969788s, waiting for 1m20s Nov 13 04:02:26.086: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:02:26.090: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:25 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:25 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:02:25 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields
}
Nov 13 04:02:27.086: INFO: node status heartbeat is unchanged for 1.000108113s, waiting for 1m20s
Nov 13 04:02:28.085: INFO: node status heartbeat is unchanged for 1.999957973s, waiting for 1m20s
Nov 13 04:02:29.085: INFO: node status heartbeat is unchanged for 2.999469425s, waiting for 1m20s
Nov 13 04:02:30.087: INFO: node status heartbeat is unchanged for 4.001933179s, waiting for 1m20s
Nov 13 04:02:31.086: INFO: node status heartbeat is unchanged for 5.000112463s, waiting for 1m20s
Nov 13 04:02:32.084: INFO: node status heartbeat is unchanged for 5.998556027s, waiting for 1m20s
Nov 13 04:02:33.084: INFO: node status heartbeat is unchanged for 6.998113798s, waiting for 1m20s
Nov 13 04:02:34.085: INFO: node status heartbeat is unchanged for 7.999108489s, waiting for 1m20s
Nov 13 04:02:35.085: INFO: node status heartbeat is unchanged for 8.999194466s, waiting for 1m20s
Nov 13 04:02:35.088: INFO: node status heartbeat is unchanged for 9.002909744s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:02:35.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-9074" for this suite.

• [SLOW TEST:300.057 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":4,"skipped":856,"failed":0}
Nov 13 04:02:35.108: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:58:39.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289
STEP: Creating pod liveness-cd8e59eb-e221-479a-9355-c984e5aa681f in namespace container-probe-5017
Nov 13 03:58:43.128: INFO: Started pod liveness-cd8e59eb-e221-479a-9355-c984e5aa681f in namespace container-probe-5017
STEP: checking the pod's current state and verifying that restartCount is present
Nov 13 03:58:43.130: INFO: Initial restart count of pod liveness-cd8e59eb-e221-479a-9355-c984e5aa681f is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:02:43.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5017" for this suite.

• [SLOW TEST:244.581 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":4,"skipped":313,"failed":0}
Nov 13 04:02:43.671: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:58:05.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Nov 13 03:58:05.797: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:58:07.802: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:58:09.802: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:58:11.800: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:58:13.802: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Nov 13 04:09:51.272: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-11-13 04:04:45 +0000 UTC restartedAt=2021-11-13 04:09:50 +0000 UTC (5m5s)
Nov 13 04:14:59.600: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-11-13 04:09:55 +0000 UTC restartedAt=2021-11-13 04:14:58 +0000 UTC (5m3s)
Nov 13 04:15:04.620: INFO: Container's last state is not "Terminated".
Nov 13 04:15:05.623: INFO: Container's last state is not "Terminated".
Nov 13 04:15:06.629: INFO: Container's last state is not "Terminated".
Nov 13 04:15:07.633: INFO: Container's last state is not "Terminated".
Nov 13 04:15:08.637: INFO: Container's last state is not "Terminated".
Nov 13 04:15:09.640: INFO: Container's last state is not "Terminated".
Nov 13 04:15:10.644: INFO: Container's last state is not "Terminated".
Nov 13 04:15:11.647: INFO: Container's last state is not "Terminated".
Nov 13 04:15:12.651: INFO: Container's last state is not "Terminated".
Nov 13 04:15:13.655: INFO: Container's last state is not "Terminated".
Nov 13 04:15:14.659: INFO: Container's last state is not "Terminated".
Nov 13 04:15:15.663: INFO: Container's last state is not "Terminated".
Nov 13 04:20:19.999: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-11-13 04:15:03 +0000 UTC restartedAt=2021-11-13 04:20:18 +0000 UTC (5m15s)
STEP: getting restart delay after a capped delay
Nov 13 04:25:29.393: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-11-13 04:20:23 +0000 UTC restartedAt=2021-11-13 04:25:27 +0000 UTC (5m4s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:25:29.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4162" for this suite.

• [SLOW TEST:1643.642 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":3,"skipped":460,"failed":0}
Nov 13 04:25:29.408: INFO: Running AfterSuite actions on all nodes
Nov 13 03:59:44.234: INFO: Running AfterSuite actions on all nodes
Nov 13 04:25:29.453: INFO: Running AfterSuite actions on node 1
Nov 13 04:25:29.453: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5770 Specs in 1706.531 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5717 Skipped

Ginkgo ran 1 suite in 28m28.077938781s
Test Suite Failed
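Note (not part of the suite output): the NodeLease spec above works by watching the node's condition heartbeats and confirming they advance only every ~10s while the node stays Ready. The following is a minimal client-go sketch of that kind of check, offered only as an illustration; the kubeconfig path and the node name "node1" are taken from the log above, and everything else is an assumption rather than the suite's actual code.

```go
// heartbeat_check.go - a standalone sketch (NOT the e2e suite's implementation)
// that polls a node's condition LastHeartbeatTime values, similar in spirit to
// the "node status heartbeat is unchanged" messages logged above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite used (assumed path).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// "node1" comes from the Addresses block in the NodeStatus diffs above.
	const nodeName = "node1"

	for i := 0; i < 10; i++ {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Print each condition's type, status, and last heartbeat timestamp.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%s: condition=%s status=%s lastHeartbeat=%s\n",
				time.Now().Format(time.RFC3339), cond.Type, cond.Status,
				cond.LastHeartbeatTime.Format(time.RFC3339))
		}
		time.Sleep(1 * time.Second)
	}
}
```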