Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636160797 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Nov 6 01:06:38.865: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:38.867: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 6 01:06:38.895: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 6 01:06:38.961: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting
Nov 6 01:06:38.961: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting
Nov 6 01:06:38.961: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 6 01:06:38.961: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 6 01:06:38.961: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 6 01:06:38.978: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 6 01:06:38.978: INFO: e2e test version: v1.21.5
Nov 6 01:06:38.979: INFO: kube-apiserver version: v1.21.1
Nov 6 01:06:38.979: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:38.985: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Nov 6 01:06:38.982: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:39.002: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
Nov 6 01:06:38.992: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:39.014: INFO: Cluster IP family: ipv4
SSS
------------------------------
Nov 6 01:06:38.996: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:39.017: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Nov 6 01:06:39.000: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:39.021: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Nov 6 01:06:39.008: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:39.032: INFO: Cluster IP family: ipv4
Nov 6 01:06:39.015: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:39.033: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Nov 6 01:06:39.014: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:39.036: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Nov 6 01:06:39.018: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:39.041: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSS
------------------------------
Nov 6 01:06:39.027: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:06:39.049: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W1106 01:06:39.108548 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 6 01:06:39.108: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 6 01:06:39.110: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140
STEP: Creating ConfigMap configmap-4492/configmap-test-7516f25f-e4ee-4c99-8c5f-57a44a880ffc
STEP: Updating configMap configmap-4492/configmap-test-7516f25f-e4ee-4c99-8c5f-57a44a880ffc
STEP: Verifying update of ConfigMap configmap-4492/configmap-test-7516f25f-e4ee-4c99-8c5f-57a44a880ffc
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:39.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4492" for this suite.
•SSSSSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
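For reference, the create/update/verify steps in the ConfigMap spec above come down to a few client-go calls. A minimal sketch of that flow, assuming a reachable cluster; the ConfigMap name, namespace, and data key are illustrative stand-ins, not the generated names from this run:

// Sketch of the ConfigMap create/update flow exercised above, using client-go.
// Name, namespace, and data key are illustrative.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Create the ConfigMap.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data": "value-1"},
	}
	created, err := client.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Update the data and verify the change round-trips through the API.
	created.Data["data"] = "value-2"
	updated, err := client.CoreV1().ConfigMaps("default").Update(ctx, created, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("updated data:", updated.Data["data"])
}
------------------------------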
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Nov 6 01:06:39.344: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:39.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-9664" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    should enforce an AppArmor profile [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W1106 01:06:39.088678 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 6 01:06:39.088: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 6 01:06:39.090: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Nov 6 01:06:39.106: INFO: Waiting up to 5m0s for pod "downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782" in namespace "downward-api-9289" to be "Succeeded or Failed"
Nov 6 01:06:39.109: INFO: Pod "downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.858348ms
Nov 6 01:06:41.113: INFO: Pod "downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00616518s
Nov 6 01:06:43.118: INFO: Pod "downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011504469s
Nov 6 01:06:45.122: INFO: Pod "downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015685209s
Nov 6 01:06:47.127: INFO: Pod "downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020771875s
Nov 6 01:06:49.131: INFO: Pod "downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024649154s
STEP: Saw pod success
Nov 6 01:06:49.131: INFO: Pod "downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782" satisfied condition "Succeeded or Failed"
Nov 6 01:06:49.134: INFO: Trying to get logs from node node2 pod downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782 container dapi-container:
STEP: delete the pod
Nov 6 01:06:49.520: INFO: Waiting for pod downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782 to disappear
Nov 6 01:06:49.522: INFO: Pod downward-api-ee600c0c-a68b-49f6-b330-a0f05fd86782 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:49.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9289" for this suite.

• [SLOW TEST:10.465 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
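The pod under test injects the node and pod IPs into environment variables via the downward API on a host-network pod. A minimal sketch of such a spec, built with the k8s.io/api types and printed as JSON; the pod name, image, and env var names are illustrative, not the framework-generated ones:

// Sketch of a host-network pod that surfaces host and pod IPs through
// downward-API env vars, as the test above exercises. Names are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			HostNetwork:   true, // with host networking, status.hostIP == status.podIP
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "HOST_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"}}},
					{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------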
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
W1106 01:06:39.185777 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 6 01:06:39.185: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 6 01:06:39.187: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
Nov 6 01:06:39.196: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] should create a pod that prints his name and namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
STEP: creating the pod
Nov 6 01:06:39.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7360 create -f -'
Nov 6 01:06:39.758: INFO: stderr: ""
Nov 6 01:06:39.758: INFO: stdout: "pod/dapi-test-pod created\n"
STEP: checking if name and namespace were passed correctly
Nov 6 01:06:53.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7360 logs dapi-test-pod test-container'
Nov 6 01:06:54.394: INFO: stderr: ""
Nov 6 01:06:54.394: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-7360\nMY_POD_IP=10.244.3.226\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
Nov 6 01:06:54.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7360 logs dapi-test-pod test-container'
Nov 6 01:06:54.627: INFO: stderr: ""
Nov 6 01:06:54.627: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-7360\nMY_POD_IP=10.244.3.226\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:54.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-7360" for this suite.

• [SLOW TEST:15.475 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133
    should create a pod that prints his name and namespace
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1106 01:06:39.628884 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 6 01:06:39.629: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 6 01:06:39.630: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Nov 6 01:06:39.644: INFO: Waiting up to 5m0s for pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e" in namespace "security-context-758" to be "Succeeded or Failed"
Nov 6 01:06:39.646: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.857516ms
Nov 6 01:06:41.649: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005328531s
Nov 6 01:06:43.654: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01017343s
Nov 6 01:06:45.658: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014530115s
Nov 6 01:06:47.663: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018675834s
Nov 6 01:06:49.666: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022626922s
Nov 6 01:06:51.672: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028332356s
Nov 6 01:06:53.677: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033516655s
Nov 6 01:06:55.681: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.037553124s
STEP: Saw pod success
Nov 6 01:06:55.681: INFO: Pod "security-context-622c60c3-da42-4a31-bd67-1c4f8840105e" satisfied condition "Succeeded or Failed"
Nov 6 01:06:55.684: INFO: Trying to get logs from node node1 pod security-context-622c60c3-da42-4a31-bd67-1c4f8840105e container test-container:
STEP: delete the pod
Nov 6 01:06:55.696: INFO: Waiting for pod security-context-622c60c3-da42-4a31-bd67-1c4f8840105e to disappear
Nov 6 01:06:55.697: INFO: Pod security-context-622c60c3-da42-4a31-bd67-1c4f8840105e no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:55.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-758" for this suite.

• [SLOW TEST:16.099 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":1,"skipped":236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
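The STEP above names the legacy seccomp.security.alpha.kubernetes.io annotation; since v1.19 the same setting is expressed with the securityContext.seccompProfile field. A sketch of an equivalent container-level unconfined profile (pod name, container name, and image are illustrative):

// Sketch of a pod that runs one container with seccomp unconfined via
// securityContext.seccompProfile, the GA replacement for the alpha
// annotation named in the STEP above. Names are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "seccomp-unconfined-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.34",
				Command: []string{"true"},
				SecurityContext: &corev1.SecurityContext{
					// Unconfined disables seccomp filtering for this container only.
					SeccompProfile: &corev1.SeccompProfile{
						Type: corev1.SeccompProfileTypeUnconfined,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------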
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull from private registry without secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:56.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1473" for this suite.

• [SLOW TEST:17.120 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":169,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
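The spec expects the pull to fail precisely because no registry credentials are attached to the pod. For contrast, a sketch of the shape that lets such a pull succeed: an imagePullSecrets reference to a pre-created docker-registry secret (registry, image, and secret names are illustrative):

// Sketch of a pod pulling a private image with an imagePullSecrets
// reference. All names here are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example.com/team/app:1.0", // private image
			}},
			// Without this reference the kubelet pulls anonymously and the
			// container sits in ErrImagePull/ImagePullBackOff, which is the
			// failure the test above asserts.
			ImagePullSecrets: []corev1.LocalObjectReference{{Name: "regcred"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------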
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:56.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] should have OwnerReferences set
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:56.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-8713" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":3,"skipped":224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
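What the OwnerReferences check above amounts to: fetch the node's Lease from the kube-node-lease namespace and inspect its owner references, which should point back at the Node object. A client-go sketch; the node name is illustrative:

// Sketch of the NodeLease OwnerReferences check: each node's Lease in
// kube-node-lease should carry an OwnerReference to its Node.
// kubeconfig path is taken from the log; the node name is illustrative.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Node leases are named after the node and live in kube-node-lease.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ref := range lease.OwnerReferences {
		fmt.Printf("owner: kind=%s name=%s uid=%s\n", ref.Kind, ref.Name, ref.UID)
	}
}
------------------------------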
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1106 01:06:39.412502 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 6 01:06:39.412: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 6 01:06:39.414: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Nov 6 01:06:39.427: INFO: Waiting up to 5m0s for pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce" in namespace "security-context-2450" to be "Succeeded or Failed"
Nov 6 01:06:39.429: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428449ms
Nov 6 01:06:41.433: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006431067s
Nov 6 01:06:43.438: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011148889s
Nov 6 01:06:45.442: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015428124s
Nov 6 01:06:47.445: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018529183s
Nov 6 01:06:49.450: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02329174s
Nov 6 01:06:51.455: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028006967s
Nov 6 01:06:53.459: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 14.032491263s
Nov 6 01:06:55.466: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 16.038650885s
Nov 6 01:06:57.470: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.04278504s
STEP: Saw pod success
Nov 6 01:06:57.470: INFO: Pod "security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce" satisfied condition "Succeeded or Failed"
Nov 6 01:06:57.472: INFO: Trying to get logs from node node2 pod security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce container test-container:
STEP: delete the pod
Nov 6 01:06:57.498: INFO: Waiting for pod security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce to disappear
Nov 6 01:06:57.499: INFO: Pod security-context-538929ff-0d5c-43b9-9826-cd387ecef8ce no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:57.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2450" for this suite.

• [SLOW TEST:18.116 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":1,"skipped":131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:40.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1106 01:06:40.083916 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 6 01:06:40.084: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 6 01:06:40.085: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Nov 6 01:06:40.100: INFO: Waiting up to 5m0s for pod "security-context-660ee7fd-240b-416a-896d-174d37facdca" in namespace "security-context-8390" to be "Succeeded or Failed"
Nov 6 01:06:40.103: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.740566ms
Nov 6 01:06:42.107: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007667736s
Nov 6 01:06:44.111: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011419843s
Nov 6 01:06:46.116: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015967176s
Nov 6 01:06:48.119: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018904403s
Nov 6 01:06:50.123: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023241372s
Nov 6 01:06:52.131: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.031655516s
Nov 6 01:06:54.135: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.034756162s
Nov 6 01:06:56.139: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.038746193s
Nov 6 01:06:58.142: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.042486996s
STEP: Saw pod success
Nov 6 01:06:58.142: INFO: Pod "security-context-660ee7fd-240b-416a-896d-174d37facdca" satisfied condition "Succeeded or Failed"
Nov 6 01:06:58.145: INFO: Trying to get logs from node node2 pod security-context-660ee7fd-240b-416a-896d-174d37facdca container test-container:
STEP: delete the pod
Nov 6 01:06:58.158: INFO: Waiting for pod security-context-660ee7fd-240b-416a-896d-174d37facdca to disappear
Nov 6 01:06:58.160: INFO: Pod security-context-660ee7fd-240b-416a-896d-174d37facdca no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:58.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8390" for this suite.

• [SLOW TEST:18.105 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:49.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull image from invalid registry [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:06:59.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1565" for this suite.

• [SLOW TEST:9.086 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":2,"skipped":224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W1106 01:06:39.273464 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 6 01:06:39.273: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 6 01:06:39.275: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53
[It] should be submitted and removed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Nov 6 01:07:02.341: INFO: start=2021-11-06 01:06:57.307359636 +0000 UTC m=+19.959093356, now=2021-11-06 01:07:02.341910542 +0000 UTC m=+24.993644286, kubelet pod: {"metadata":{"name":"pod-submit-remove-a0a36758-fb81-406f-aa6f-0f3ac8d9c081","namespace":"pods-1964","uid":"4b727986-a502-4c83-9e34-da21dd368560","resourceVersion":"83056","creationTimestamp":"2021-11-06T01:06:39Z","deletionTimestamp":"2021-11-06T01:07:27Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"278078073"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.221\"\n ],\n \"mac\": \"a6:0c:d9:96:38:fd\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.221\"\n ],\n \"mac\": \"a6:0c:d9:96:38:fd\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-11-06T01:06:39.291227536Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-11-06T01:06:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-dmtmw","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-dmtmw","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-06T01:06:39Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-06T01:06:52Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-06T01:06:52Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-06T01:06:39Z"}],"hostIP":"10.10.190.207","podIP":"10.244.3.221","podIPs":[{"ip":"10.244.3.221"}],"startTime":"2021-11-06T01:06:39Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2021-11-06T01:06:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://4a53b380d7c46d5c9bc2a915b733c3540bfdc7127b31cb1fc1ca0fcb42dc9e9b","started":true}],"qosClass":"BestEffort"}}
Nov 6 01:07:07.332: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:07:07.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1964" for this suite.

• [SLOW TEST:28.092 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":1,"skipped":73,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
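The submit-and-remove flow above is, in client-go terms, a graceful delete followed by polling until the pod object disappears (the API server stamps deletionTimestamp immediately; the object is only removed once the kubelet observes and completes termination). A minimal sketch; pod name and namespace are illustrative:

// Sketch of a graceful pod delete plus wait-until-gone, as the Delete
// Grace Period spec above exercises. Pod name and namespace are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Delete with an explicit 30s grace period, as seen in the pod JSON above.
	grace := int64(30)
	err = client.CoreV1().Pods("default").Delete(ctx, "pod-submit-remove-demo",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}

	// Poll until the pod object is gone, i.e. termination completed.
	for {
		_, err := client.CoreV1().Pods("default").Get(ctx, "pod-submit-remove-demo", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod is gone; termination observed and completed")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
------------------------------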
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:07:07.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33
Nov 6 01:07:07.402: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:07:07.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-9110" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:07:07.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Nov 6 01:07:07.551: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:07:07.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-8954" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W1106 01:06:39.061521 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 6 01:06:39.061: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 6 01:06:39.065: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777
STEP: submitting the pod to kubernetes
STEP: patching pod status with condition "k8s.io/test-condition1" to true
STEP: patching pod status with condition "k8s.io/test-condition2" to true
Nov 6 01:07:02.109: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false
STEP: patching pod status with condition "k8s.io/test-condition1" to false
Nov 6 01:07:04.118: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Nov 6 01:07:05.118: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Nov 6 01:07:06.119: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Nov 6 01:07:07.121: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:07:08.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-844" for this suite.

• [SLOW TEST:29.097 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
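The readiness-gate mechanics exercised above: the pod lists custom condition types under spec.readinessGates, and its Ready condition only turns True once every listed condition is set True on the pod status (the suite does that by patching the status subresource). A sketch of such a pod, reusing the condition types and image seen in this run; the rest is illustrative:

// Sketch of a pod with custom readiness gates, matching the conditions the
// test above patches ("k8s.io/test-condition1" and "k8s.io/test-condition2").
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-ready"},
		Spec: corev1.PodSpec{
			// Ready stays False until both conditions are patched True on status.
			ReadinessGates: []corev1.PodReadinessGate{
				{ConditionType: "k8s.io/test-condition1"},
				{ConditionType: "k8s.io/test-condition2"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"pause"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------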
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:07:07.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-privileged-pod
STEP: Waiting for a default service account to be provisioned in namespace
[It] should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
STEP: Creating a pod with a privileged container
Nov 6 01:07:07.685: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:07:09.688: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:07:11.690: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:07:13.689: INFO: The status of Pod privileged-pod is Running (Ready = true)
STEP: Executing in the privileged container
Nov 6 01:07:13.692: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1461 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 6 01:07:13.692: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:07:14.163: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-1461 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 6 01:07:14.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Executing in the non-privileged container
Nov 6 01:07:14.260: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1461 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 6 01:07:14.260: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:07:14.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-1461" for this suite.

• [SLOW TEST:6.703 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":2,"skipped":198,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
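A sketch of the two-container shape this spec uses: one privileged container, where a command like `ip link add dummy1 type dummy` succeeds, and one unprivileged container, where the same command is refused. Built with the k8s.io/api types; pod name, container names, image, and commands are illustrative:

// Sketch of a pod pairing a privileged and an unprivileged container,
// as the PrivilegedPod spec above does. Names are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:    "privileged-container",
					Image:   "busybox:1.34",
					Command: []string{"sleep", "3600"},
					// Privileged: host-level operations such as creating
					// network devices are allowed here.
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
				},
				{
					Name:    "not-privileged-container",
					Image:   "busybox:1.34",
					Command: []string{"sleep", "3600"},
					// Unprivileged: the same operations fail here.
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------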
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:07:08.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 6 01:07:14.760: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:07:14.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8889" for this suite.

• [SLOW TEST:6.078 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":2,"skipped":326,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
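The termination-message check above relies on the kubelet copying the file at terminationMessagePath into the terminated container's status (the test expects the message "DONE"). A sketch of a container that writes DONE there before exiting; pod name, container name, and image are illustrative:

// Sketch of a container surfacing its exit message through
// terminationMessagePath, as asserted by the spec above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-log"},
				// The kubelet copies this file into
				// status.containerStatuses[].state.terminated.message.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageReadFile,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------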
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:39.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
W1106 01:06:39.092924 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 6 01:06:39.093: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 6 01:06:39.095: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: waiting for pod running
STEP: deleting the pod gracefully
STEP: verifying the pod is running while in the graceful period termination
Nov 6 01:07:15.152: INFO: pod is running
[AfterEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:07:15.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-116" for this suite.

• [SLOW TEST:36.094 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":1,"skipped":16,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
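The PreStop behavior verified above: on a graceful delete the kubelet first runs the preStop hook, and the pod stays Running until the hook finishes or the grace period expires. A sketch of a pod with an exec preStop hook; note that in the v1.21 API tree this suite builds against, the hook type is corev1.Handler (renamed LifecycleHandler in later releases). Pod name, image, and durations are illustrative:

// Sketch of a pod whose preStop exec hook delays termination, matching
// the behavior the PreStop spec above verifies. Names are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	grace := int64(30)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.34",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// While this hook runs, the pod remains Running even
					// though it has been deleted gracefully.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "sleep 10"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------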
• ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":4,"skipped":418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:19.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Nov 6 01:07:19.148: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:19.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-87" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:06:39.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet W1106 01:06:39.182203 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:06:39.182: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:06:39.184: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-f5e2f3a3-ce3f-4f94-8ff4-40c68a46d5e5 in namespace kubelet-9181 I1106 01:06:39.216690 26 runners.go:190] Created replication controller with name: cleanup20-f5e2f3a3-ce3f-4f94-8ff4-40c68a46d5e5, namespace: kubelet-9181, replica count: 20 I1106 01:06:49.268257 26 runners.go:190] cleanup20-f5e2f3a3-ce3f-4f94-8ff4-40c68a46d5e5 Pods: 20 out of 20 created, 0 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1106 01:06:59.269487 26 runners.go:190] cleanup20-f5e2f3a3-ce3f-4f94-8ff4-40c68a46d5e5 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 01:07:00.270: INFO: Checking pods on node node2 via /runningpods endpoint Nov 6 01:07:00.270: INFO: Checking pods on node node1 via /runningpods endpoint Nov 6 01:07:00.304: INFO: Resource usage on node "node2" is not ready yet Nov 6 01:07:00.304: INFO: Resource usage on node "master1":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        0.415       4884.93                  1667.04
"runtime"  0.110       681.33                   301.43
"kubelet"  0.110       681.33                   301.43
Resource usage on node "master2":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        0.341       3542.16                  1469.01
"runtime"  0.095       589.53                   235.59
"kubelet"  0.095       589.53                   235.59
Resource usage on node "master3":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"kubelet"  0.101       556.77                   265.04
"/"        0.437       3850.39                  1702.74
"runtime"  0.101       556.77                   265.04
Resource usage on node "node1":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"runtime"  0.139       2437.01                  411.96
"kubelet"  0.139       2437.01                  411.96
"/"        1.678       6472.30                  2394.25
STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-f5e2f3a3-ce3f-4f94-8ff4-40c68a46d5e5 in namespace kubelet-9181, will wait for the garbage collector to delete the pods Nov 6 01:07:00.362: INFO: Deleting ReplicationController cleanup20-f5e2f3a3-ce3f-4f94-8ff4-40c68a46d5e5 took: 4.325767ms Nov 6 01:07:00.962: INFO: Terminating ReplicationController cleanup20-f5e2f3a3-ce3f-4f94-8ff4-40c68a46d5e5 pods took: 600.173563ms Nov 6 01:07:20.664: INFO: Checking pods on node node2 via /runningpods endpoint Nov 6 01:07:20.664: INFO: Checking pods on node node1 via /runningpods endpoint Nov 6 01:07:20.687: INFO: Deleting 20 pods on 2 nodes completed in 1.023667972s after the RC was deleted Nov 6 01:07:20.687: INFO: CPU usage of containers on node "master2":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.318  0.318  0.326  0.326  0.326
"runtime"  0.000  0.000  0.086  0.086  0.086  0.086  0.086
"kubelet"  0.000  0.000  0.086  0.086  0.086  0.086  0.086
CPU usage of containers on node "master3":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.450  0.450  0.513  0.513  0.513
"runtime"  0.000  0.000  0.101  0.106  0.106  0.106  0.106
"kubelet"  0.000  0.000  0.101  0.106  0.106  0.106  0.106
CPU usage of containers on node "node1":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  1.678  1.969  1.969  1.969  1.969
"runtime"  0.000  0.000  0.139  0.905  0.905  0.905  0.905
"kubelet"  0.000  0.000  0.139  0.905  0.905  0.905  0.905
CPU usage of containers on node "node2":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.915  1.689  1.689  1.689  1.689
"runtime"  0.000  0.000  0.687  0.687  0.687  0.687  0.687
"kubelet"  0.000  0.000  0.687  0.687  0.687  0.687  0.687
CPU usage of containers on node "master1":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.415  0.415  0.477  0.477  0.477
"runtime"  0.000  0.000  0.110  0.110  0.110  0.110  0.110
"kubelet"  0.000  0.000  0.110  0.110  0.110  0.110  0.110
[AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node node1 STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node node2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:20.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-9181" for this suite. • [SLOW TEST:41.559 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:14.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Nov 6 01:07:14.444: INFO: Waiting up to 5m0s for pod "security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d" in namespace "security-context-8559" to be "Succeeded or Failed" Nov 6 01:07:14.445: INFO: Pod "security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.723631ms Nov 6 01:07:16.450: INFO: Pod "security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005905957s Nov 6 01:07:18.453: INFO: Pod "security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009128859s Nov 6 01:07:20.457: INFO: Pod "security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013682393s Nov 6 01:07:22.460: INFO: Pod "security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 8.016590537s STEP: Saw pod success Nov 6 01:07:22.460: INFO: Pod "security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d" satisfied condition "Succeeded or Failed" Nov 6 01:07:22.463: INFO: Trying to get logs from node node2 pod security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d container test-container: STEP: delete the pod Nov 6 01:07:22.475: INFO: Waiting for pod security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d to disappear Nov 6 01:07:22.477: INFO: Pod security-context-d48f7745-bf81-4484-9d15-2c03581f4f4d no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:22.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8559" for this suite. • [SLOW TEST:8.075 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:06:58.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-8c7c6e4f-ad93-4878-b92c-42eed694d624 in namespace container-probe-7732 Nov 6 01:07:12.463: INFO: Started pod liveness-override-8c7c6e4f-ad93-4878-b92c-42eed694d624 in namespace container-probe-7732 STEP: checking the pod's current state and verifying that restartCount is present Nov 6 01:07:12.466: INFO: Initial restart count of pod liveness-override-8c7c6e4f-ad93-4878-b92c-42eed694d624 is 1 Nov 6 01:07:24.489: INFO: Restart count of pod container-probe-7732/liveness-override-8c7c6e4f-ad93-4878-b92c-42eed694d624 is now 2 (12.023205635s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:24.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7732" for this suite. 
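The ProbeTerminationGracePeriod spec above exercises the probe-level terminationGracePeriodSeconds field (new in v1.21, behind the feature gate named in the spec title) overriding the pod-level grace period when a liveness probe fails, which is why the restart count climbs from 1 to 2 within seconds. A hedged sketch of the shape being tested (name, image, and timings are illustrative; the run above generates a suffixed pod name):

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-override              # illustrative
  spec:
    terminationGracePeriodSeconds: 600   # pod-level grace period
    containers:
    - name: c
      image: busybox                     # assumption; any long-running image works
      command: ["sh", "-c", "sleep 3600"]
      livenessProbe:
        exec:
          command: ["/bin/false"]        # always fails, forcing restarts
        initialDelaySeconds: 1
        periodSeconds: 2
        failureThreshold: 1
        terminationGracePeriodSeconds: 5 # probe-level override: kill quickly on liveness failure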
• [SLOW TEST:26.081 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":2,"skipped":602,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:15.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Nov 6 01:07:15.322: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Nov 6 01:07:15.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1378 create -f -' Nov 6 01:07:15.735: INFO: stderr: "" Nov 6 01:07:15.735: INFO: stdout: "secret/test-secret created\n" Nov 6 01:07:15.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1378 create -f -' Nov 6 01:07:16.062: INFO: stderr: "" Nov 6 01:07:16.062: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Nov 6 01:07:26.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1378 logs secret-test-pod test-container' Nov 6 01:07:26.514: INFO: stderr: "" Nov 6 01:07:26.515: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:26.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-1378" for this suite. 
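The two `kubectl create -f -` invocations above feed in a Secret and a pod that mounts it; reconstructing their rough shape from the names and logged output of this run (the exact manifests live in the examples tree, so treat this as a sketch, with the image as an assumption):

  apiVersion: v1
  kind: Secret
  metadata:
    name: test-secret
  data:
    data-1: dmFsdWUtMQ==           # base64("value-1"), matching the logged file content
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-test-pod
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox               # assumption; the real example pins a specific image
      command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: test-secret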
• [SLOW TEST:11.230 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":2,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:22.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Nov 6 01:07:22.762: INFO: Waiting up to 5m0s for pod "busybox-user-0-9d9281bf-3929-44e6-9429-0e8f8605e30a" in namespace "security-context-test-6929" to be "Succeeded or Failed" Nov 6 01:07:22.768: INFO: Pod "busybox-user-0-9d9281bf-3929-44e6-9429-0e8f8605e30a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.596732ms Nov 6 01:07:24.771: INFO: Pod "busybox-user-0-9d9281bf-3929-44e6-9429-0e8f8605e30a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008819784s Nov 6 01:07:26.774: INFO: Pod "busybox-user-0-9d9281bf-3929-44e6-9429-0e8f8605e30a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012129137s Nov 6 01:07:28.777: INFO: Pod "busybox-user-0-9d9281bf-3929-44e6-9429-0e8f8605e30a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01529633s Nov 6 01:07:30.781: INFO: Pod "busybox-user-0-9d9281bf-3929-44e6-9429-0e8f8605e30a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019028794s Nov 6 01:07:32.784: INFO: Pod "busybox-user-0-9d9281bf-3929-44e6-9429-0e8f8605e30a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.021739903s Nov 6 01:07:32.784: INFO: Pod "busybox-user-0-9d9281bf-3929-44e6-9429-0e8f8605e30a" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:32.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6929" for this suite. 
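The runAsUser case above needs only a securityContext forcing UID 0; a minimal sketch (image and command are assumptions, name follows the pattern in the run):

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-user-0           # illustrative; the run suffixes a UUID
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "id -u"]   # prints 0 when the override applies
      securityContext:
        runAsUser: 0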
• [SLOW TEST:10.067 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:24.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Nov 6 01:07:24.572: INFO: Waiting up to 5m0s for pod "security-context-909deba6-dd86-479d-9647-d83759369a29" in namespace "security-context-6871" to be "Succeeded or Failed" Nov 6 01:07:24.574: INFO: Pod "security-context-909deba6-dd86-479d-9647-d83759369a29": Phase="Pending", Reason="", readiness=false. Elapsed: 1.918002ms Nov 6 01:07:26.577: INFO: Pod "security-context-909deba6-dd86-479d-9647-d83759369a29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005193641s Nov 6 01:07:28.581: INFO: Pod "security-context-909deba6-dd86-479d-9647-d83759369a29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009043967s Nov 6 01:07:30.586: INFO: Pod "security-context-909deba6-dd86-479d-9647-d83759369a29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014006986s Nov 6 01:07:32.589: INFO: Pod "security-context-909deba6-dd86-479d-9647-d83759369a29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016826983s Nov 6 01:07:34.592: INFO: Pod "security-context-909deba6-dd86-479d-9647-d83759369a29": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020231952s Nov 6 01:07:36.597: INFO: Pod "security-context-909deba6-dd86-479d-9647-d83759369a29": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024707808s Nov 6 01:07:38.600: INFO: Pod "security-context-909deba6-dd86-479d-9647-d83759369a29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.028228835s STEP: Saw pod success Nov 6 01:07:38.600: INFO: Pod "security-context-909deba6-dd86-479d-9647-d83759369a29" satisfied condition "Succeeded or Failed" Nov 6 01:07:38.603: INFO: Trying to get logs from node node2 pod security-context-909deba6-dd86-479d-9647-d83759369a29 container test-container: STEP: delete the pod Nov 6 01:07:38.614: INFO: Waiting for pod security-context-909deba6-dd86-479d-9647-d83759369a29 to disappear Nov 6 01:07:38.616: INFO: Pod security-context-909deba6-dd86-479d-9647-d83759369a29 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:38.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6871" for this suite. • [SLOW TEST:14.083 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":3,"skipped":619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:38.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Nov 6 01:07:38.783: INFO: Waiting up to 5m0s for pod "security-context-c0d6bdeb-a061-448a-8059-43756b5a2039" in namespace "security-context-3751" to be "Succeeded or Failed" Nov 6 01:07:38.787: INFO: Pod "security-context-c0d6bdeb-a061-448a-8059-43756b5a2039": Phase="Pending", Reason="", readiness=false. Elapsed: 3.342041ms Nov 6 01:07:40.792: INFO: Pod "security-context-c0d6bdeb-a061-448a-8059-43756b5a2039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00832516s Nov 6 01:07:42.796: INFO: Pod "security-context-c0d6bdeb-a061-448a-8059-43756b5a2039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012508903s STEP: Saw pod success Nov 6 01:07:42.796: INFO: Pod "security-context-c0d6bdeb-a061-448a-8059-43756b5a2039" satisfied condition "Succeeded or Failed" Nov 6 01:07:42.799: INFO: Trying to get logs from node node1 pod security-context-c0d6bdeb-a061-448a-8059-43756b5a2039 container test-container: STEP: delete the pod Nov 6 01:07:42.809: INFO: Waiting for pod security-context-c0d6bdeb-a061-448a-8059-43756b5a2039 to disappear Nov 6 01:07:42.811: INFO: Pod security-context-c0d6bdeb-a061-448a-8059-43756b5a2039 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:42.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3751" for this suite. 
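Both seccomp specs above still drive the deprecated seccomp.security.alpha.kubernetes.io/pod annotation (visible in their STEP lines); the field-based equivalent, GA since v1.19, looks roughly like this, with Unconfined swapped in for the second test (pod name, image, and command are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: seccomp-demo             # hypothetical name
  spec:
    securityContext:
      seccompProfile:
        type: RuntimeDefault       # the "runtime/default" case; Unconfined for the other
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox               # assumption
      command: ["sh", "-c", "grep Seccomp /proc/self/status"]  # "Seccomp: 2" means a filter is active

With Unconfined the same grep reports "Seccomp: 0", i.e. no filter applied.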
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":4,"skipped":684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:33.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. STEP: verifying the node has the label foo-e7bfad40-c8a5-4f5a-b02d-eaadce539045 bar STEP: verifying the node has the label fizz-4d1f69a0-c24f-4595-b160-65408f61c153 buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-4d1f69a0-c24f-4595-b160-65408f61c153 off the node node1 STEP: verifying the node doesn't have the label fizz-4d1f69a0-c24f-4595-b160-65408f61c153 STEP: removing the label foo-e7bfad40-c8a5-4f5a-b02d-eaadce539045 off the node node1 STEP: verifying the node doesn't have the label foo-e7bfad40-c8a5-4f5a-b02d-eaadce539045 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:43.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-9040" for this suite. 
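The RuntimeClass spec above labels node1, creates a RuntimeClass whose scheduling constraints point at those labels, and runs a pod referencing it; a sketch under the assumption of a runc handler (the label keys and values are the ones from this run, the object names are hypothetical):

  apiVersion: node.k8s.io/v1
  kind: RuntimeClass
  metadata:
    name: test-runtimeclass        # hypothetical name
  handler: runc                    # assumption: a handler the CRI runtime knows
  scheduling:
    nodeSelector:
      foo-e7bfad40-c8a5-4f5a-b02d-eaadce539045: bar
      fizz-4d1f69a0-c24f-4595-b160-65408f61c153: buzz
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: runtimeclass-pod         # hypothetical name
  spec:
    runtimeClassName: test-runtimeclass   # merges the nodeSelector above into the pod
    containers:
    - name: c
      image: busybox               # assumption
      command: ["true"]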
• [SLOW TEST:10.125 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":5,"skipped":552,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:43.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Nov 6 01:07:43.778: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3026" to be "Succeeded or Failed" Nov 6 01:07:43.782: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383469ms Nov 6 01:07:45.787: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008743923s Nov 6 01:07:47.790: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01210563s Nov 6 01:07:49.793: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015103796s Nov 6 01:07:51.797: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019435739s Nov 6 01:07:53.801: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.023590835s Nov 6 01:07:53.801: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:07:53.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3026" for this suite. 
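The implicit-nonroot-uid pod above sets runAsNonRoot without any runAsUser, so the effective UID must come from the image's own USER directive, which the kubelet then validates. A sketch under the assumption of an e2e test image built with a non-root USER:

  apiVersion: v1
  kind: Pod
  metadata:
    name: implicit-nonroot-uid     # name from the run above
  spec:
    restartPolicy: Never
    containers:
    - name: c
      image: k8s.gcr.io/e2e-test-images/nonroot:1.1   # assumption: an image whose USER is non-root
      securityContext:
        runAsNonRoot: true         # no runAsUser here: the image metadata supplies the UID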
• [SLOW TEST:10.068 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":6,"skipped":810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:54.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Nov 6 01:07:54.040: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2662" to be "Succeeded or Failed" Nov 6 01:07:54.049: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 9.187325ms Nov 6 01:07:56.053: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013781761s Nov 6 01:07:58.059: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019126802s Nov 6 01:08:00.064: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024609529s Nov 6 01:08:00.064: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:00.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2662" for this suite. 
• [SLOW TEST:6.072 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:06:56.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Nov 6 01:06:56.977: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:06:58.980: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:00.981: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:02.979: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:04.980: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:06.980: INFO: The status of Pod master is Running (Ready = true) Nov 6 01:07:06.995: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:08.999: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:10.999: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:12.999: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:14.999: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:17.000: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:18.998: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:20.999: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:22.999: INFO: The status of Pod slave is Running (Ready = true) Nov 6 01:07:23.015: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:25.022: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:27.019: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:29.018: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:31.022: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:33.018: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:35.021: INFO: The status of Pod private is Running (Ready = true) Nov 6 01:07:35.038: INFO: The status of Pod default is Pending, waiting for it to be Running (with 
Ready = true) Nov 6 01:07:37.042: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:39.040: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:41.042: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:43.041: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:07:45.044: INFO: The status of Pod default is Running (Ready = true) Nov 6 01:07:45.049: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:45.049: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:45.825: INFO: Exec stderr: "" Nov 6 01:07:45.827: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:45.827: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:45.962: INFO: Exec stderr: "" Nov 6 01:07:45.965: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:45.965: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:46.103: INFO: Exec stderr: "" Nov 6 01:07:46.115: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:46.115: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:46.525: INFO: Exec stderr: "" Nov 6 01:07:46.528: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:46.528: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:46.860: INFO: Exec stderr: "" Nov 6 01:07:46.862: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:46.862: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:47.012: INFO: Exec stderr: "" Nov 6 01:07:47.014: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:47.014: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:47.131: INFO: Exec stderr: "" Nov 6 01:07:47.133: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:47.133: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:47.224: INFO: Exec stderr: "" Nov 6 01:07:47.227: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:47.227: INFO: >>> 
kubeConfig: /root/.kube/config Nov 6 01:07:47.393: INFO: Exec stderr: "" Nov 6 01:07:47.395: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:47.395: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:47.970: INFO: Exec stderr: "" Nov 6 01:07:47.972: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:47.972: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:48.071: INFO: Exec stderr: "" Nov 6 01:07:48.074: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:48.074: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:48.165: INFO: Exec stderr: "" Nov 6 01:07:48.167: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:48.167: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:48.277: INFO: Exec stderr: "" Nov 6 01:07:48.281: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:48.281: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:48.374: INFO: Exec stderr: "" Nov 6 01:07:48.376: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:48.376: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:48.477: INFO: Exec stderr: "" Nov 6 01:07:48.480: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:48.480: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:48.607: INFO: Exec stderr: "" Nov 6 01:07:48.611: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:48.611: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:48.727: INFO: Exec stderr: "" Nov 6 01:07:48.731: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:48.731: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:48.846: INFO: Exec stderr: "" Nov 6 01:07:48.849: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:48.849: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:48.978: INFO: Exec stderr: "" Nov 6 01:07:48.981: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:48.981: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:49.089: INFO: Exec stderr: "" Nov 6 01:07:57.106: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-9406"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-9406"/host; echo host > "/var/lib/kubelet/mount-propagation-9406"/host/file] Namespace:mount-propagation-9406 PodName:hostexec-node2-pgltk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:07:57.107: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:57.257: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:57.257: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:57.379: INFO: pod slave mount master: stdout: "master", stderr: "" error: Nov 6 01:07:57.381: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:57.381: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:57.470: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Nov 6 01:07:57.474: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:57.474: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:57.565: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:57.568: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:57.568: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:57.681: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:57.684: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:57.684: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:57.763: INFO: pod slave mount host: stdout: "host", stderr: "" error: Nov 6 01:07:57.767: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:57.767: INFO: >>> kubeConfig: 
/root/.kube/config Nov 6 01:07:57.890: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:57.893: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:57.893: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:57.967: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:57.969: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:57.970: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:58.050: INFO: pod private mount private: stdout: "private", stderr: "" error: Nov 6 01:07:58.053: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:58.053: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:58.132: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:58.135: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:58.135: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:58.214: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:58.216: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:58.216: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:58.314: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:58.317: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:58.317: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:58.427: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:58.429: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:58.430: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:58.630: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:58.632: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:58.632: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:58.849: INFO: pod default mount default: stdout: "default", stderr: "" error: Nov 6 01:07:58.851: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:58.851: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:59.024: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:59.025: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:59.026: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:59.139: INFO: pod master mount master: stdout: "master", stderr: "" error: Nov 6 01:07:59.141: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:59.141: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:59.244: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:59.246: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:59.246: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:59.327: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:59.330: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:59.330: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:59.407: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 6 01:07:59.409: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:59.410: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:59.533: INFO: pod master mount host: stdout: "host", stderr: "" error: Nov 6 01:07:59.533: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-9406"/master/file` = master] Namespace:mount-propagation-9406 PodName:hostexec-node2-pgltk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:07:59.533: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:59.642: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-9406"/slave/file] Namespace:mount-propagation-9406 PodName:hostexec-node2-pgltk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:07:59.642: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:59.741: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-9406"/host] Namespace:mount-propagation-9406 PodName:hostexec-node2-pgltk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:07:59.741: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:07:59.878: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-9406 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:07:59.878: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:08:00.027: INFO: Exec stderr: "" Nov 6 01:08:00.029: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-9406 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:08:00.029: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:08:00.122: INFO: Exec stderr: "" Nov 6 01:08:00.125: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-9406 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:08:00.125: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:08:00.222: INFO: Exec stderr: "" Nov 6 01:08:00.224: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-9406 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:08:00.224: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:08:00.337: INFO: Exec stderr: "" Nov 6 01:08:00.337: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-9406"] Namespace:mount-propagation-9406 PodName:hostexec-node2-pgltk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:08:00.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node2-pgltk in namespace mount-propagation-9406 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:00.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-9406" for this suite. 
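The four pods above (master, slave, private, default) differ only in the mountPropagation mode on a shared hostPath volume, which is exactly the matrix the cat probes verify: slave sees the master and host mounts, master additionally propagates its own mount back to the host, and private/default see only their own. A sketch of the master pod; slave would use HostToContainer, and private/default omit the field entirely (None). Image and command are assumptions, the host path is the per-suite directory from this run:

  apiVersion: v1
  kind: Pod
  metadata:
    name: master                   # pod names as in the run above
  spec:
    containers:
    - name: cntr
      image: busybox               # assumption
      command: ["sleep", "3600"]
      securityContext:
        privileged: true           # Bidirectional propagation requires a privileged container
      volumeMounts:
      - name: shared
        mountPath: /mnt/test
        mountPropagation: Bidirectional   # mounts made here propagate back to the host
    volumes:
    - name: shared
      hostPath:
        path: /var/lib/kubelet/mount-propagation-9406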
• [SLOW TEST:63.550 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":4,"skipped":325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":7,"skipped":911,"failed":0} [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:00.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:06.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5375" for this suite. • [SLOW TEST:6.105 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":8,"skipped":911,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:00.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Nov 6 01:08:00.743: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-12da4411-1e57-49c9-8631-1e6b0c0d25c9" in namespace "security-context-test-8840" to be "Succeeded or Failed" Nov 6 01:08:00.747: INFO: Pod "busybox-readonly-true-12da4411-1e57-49c9-8631-1e6b0c0d25c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.986487ms Nov 6 01:08:02.750: INFO: Pod "busybox-readonly-true-12da4411-1e57-49c9-8631-1e6b0c0d25c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006987655s Nov 6 01:08:04.754: INFO: Pod "busybox-readonly-true-12da4411-1e57-49c9-8631-1e6b0c0d25c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010995166s Nov 6 01:08:06.757: INFO: Pod "busybox-readonly-true-12da4411-1e57-49c9-8631-1e6b0c0d25c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014087636s Nov 6 01:08:08.761: INFO: Pod "busybox-readonly-true-12da4411-1e57-49c9-8631-1e6b0c0d25c9": Phase="Failed", Reason="", readiness=false. Elapsed: 8.017733883s Nov 6 01:08:08.761: INFO: Pod "busybox-readonly-true-12da4411-1e57-49c9-8631-1e6b0c0d25c9" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:08.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8840" for this suite. • [SLOW TEST:8.057 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":440,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:06.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Nov 6 01:08:06.484: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-2316cd33-385f-4bc7-bf9e-b861ba3e09e1" in namespace "security-context-test-9167" to be "Succeeded or Failed" Nov 6 01:08:06.487: INFO: Pod 
"busybox-privileged-true-2316cd33-385f-4bc7-bf9e-b861ba3e09e1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.665633ms Nov 6 01:08:08.491: INFO: Pod "busybox-privileged-true-2316cd33-385f-4bc7-bf9e-b861ba3e09e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007359243s Nov 6 01:08:10.496: INFO: Pod "busybox-privileged-true-2316cd33-385f-4bc7-bf9e-b861ba3e09e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012758086s Nov 6 01:08:12.499: INFO: Pod "busybox-privileged-true-2316cd33-385f-4bc7-bf9e-b861ba3e09e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015697659s Nov 6 01:08:14.503: INFO: Pod "busybox-privileged-true-2316cd33-385f-4bc7-bf9e-b861ba3e09e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019225251s Nov 6 01:08:14.503: INFO: Pod "busybox-privileged-true-2316cd33-385f-4bc7-bf9e-b861ba3e09e1" satisfied condition "Succeeded or Failed" Nov 6 01:08:14.509: INFO: Got logs for pod "busybox-privileged-true-2316cd33-385f-4bc7-bf9e-b861ba3e09e1": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:14.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9167" for this suite. • [SLOW TEST:8.068 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":9,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:15.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:17.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5669" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":10,"skipped":1326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:06:55.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Nov 6 01:06:55.966: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Nov 6 01:06:56.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1019 create -f -' Nov 6 01:06:56.398: INFO: stderr: "" Nov 6 01:06:56.398: INFO: stdout: "pod/liveness-exec created\n" Nov 6 01:06:56.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1019 create -f -' Nov 6 01:06:56.726: INFO: stderr: "" Nov 6 01:06:56.726: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Nov 6 01:07:06.738: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:08.741: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:10.734: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:10.744: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:12.738: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:12.748: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:14.741: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:14.751: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:16.748: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:16.754: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:18.752: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:18.758: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:20.755: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:20.762: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:22.759: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:22.768: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:24.763: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:24.770: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:26.767: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:26.773: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:28.771: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:28.777: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:30.777: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:30.779: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:32.781: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:32.782: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:34.787: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:34.787: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:36.790: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:36.790: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:38.794: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:38.794: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:40.798: INFO: Pod: liveness-exec, 
restart count:0 Nov 6 01:07:40.798: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:42.801: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:42.801: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:44.806: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:44.806: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:46.810: INFO: Pod: liveness-http, restart count:0 Nov 6 01:07:46.811: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:48.815: INFO: Pod: liveness-http, restart count:1 Nov 6 01:07:48.815: INFO: Saw liveness-http restart, succeeded... Nov 6 01:07:48.815: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:50.818: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:52.822: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:54.825: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:56.830: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:07:58.833: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:00.838: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:02.842: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:04.846: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:06.851: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:08.855: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:10.859: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:12.862: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:14.870: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:16.874: INFO: Pod: liveness-exec, restart count:0 Nov 6 01:08:18.878: INFO: Pod: liveness-exec, restart count:1 Nov 6 01:08:18.878: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:18.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-1019" for this suite. 
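The two example pods here are close to the upstream liveness examples: liveness-exec deletes its own health file after 30 seconds so its exec probe starts failing, and liveness-http flips its /healthz handler to errors. A sketch of the exec variant, assuming the stock example manifest (the suite pipes YAML like this into kubectl create -f -, as the stdout lines above show):

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/busybox
        args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/healthy"]   # fails once /tmp/healthy is removed
          initialDelaySeconds: 5
          periodSeconds: 5

After failureThreshold (default 3) consecutive failures the kubelet kills and restarts the container, which is the restart count flipping from 0 to 1 above; the two pods simply cross their failure thresholds at different times.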
• [SLOW TEST:82.976 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":2,"skipped":342,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:26.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-46790246-9001-4085-ab8b-a88c7bcac761 in namespace container-probe-5203 Nov 6 01:07:38.638: INFO: Started pod busybox-46790246-9001-4085-ab8b-a88c7bcac761 in namespace container-probe-5203 STEP: checking the pod's current state and verifying that restartCount is present Nov 6 01:07:38.640: INFO: Initial restart count of pod busybox-46790246-9001-4085-ab8b-a88c7bcac761 is 0 Nov 6 01:08:20.730: INFO: Restart count of pod container-probe-5203/busybox-46790246-9001-4085-ab8b-a88c7bcac761 is now 1 (42.090040953s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:20.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5203" for this suite. 
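What this spec checks: the kubelet enforces timeoutSeconds on exec probes (behind the ExecProbeTimeout feature gate introduced in v1.20; before that, exec probe timeouts were silently ignored), so a probe command that hangs past its timeout counts as a probe failure and the container gets restarted — the restart seen after ~42s above. A minimal sketch, with illustrative names and timings:

    apiVersion: v1
    kind: Pod
    metadata:
      name: probe-timeout-demo           # illustrative name
    spec:
      containers:
      - name: busybox
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sleep", "600"]
        livenessProbe:
          exec:
            command: ["/bin/sh", "-c", "sleep 10"]  # never returns within the timeout
          timeoutSeconds: 1
          failureThreshold: 1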
• [SLOW TEST:54.151 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":3,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:08.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Nov 6 01:08:08.990: INFO: Waiting up to 5m0s for pod "pod-always-succeed9c9a1c76-b44e-44c4-9f31-22d717a6232c" in namespace "pods-9988" to be "Succeeded or Failed" Nov 6 01:08:08.992: INFO: Pod "pod-always-succeed9c9a1c76-b44e-44c4-9f31-22d717a6232c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.77428ms Nov 6 01:08:10.995: INFO: Pod "pod-always-succeed9c9a1c76-b44e-44c4-9f31-22d717a6232c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004786347s Nov 6 01:08:13.001: INFO: Pod "pod-always-succeed9c9a1c76-b44e-44c4-9f31-22d717a6232c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011058069s Nov 6 01:08:15.005: INFO: Pod "pod-always-succeed9c9a1c76-b44e-44c4-9f31-22d717a6232c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015556866s Nov 6 01:08:17.010: INFO: Pod "pod-always-succeed9c9a1c76-b44e-44c4-9f31-22d717a6232c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01991653s Nov 6 01:08:19.013: INFO: Pod "pod-always-succeed9c9a1c76-b44e-44c4-9f31-22d717a6232c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.023278143s STEP: Saw pod success Nov 6 01:08:19.013: INFO: Pod "pod-always-succeed9c9a1c76-b44e-44c4-9f31-22d717a6232c" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:21.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9988" for this suite. 
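The assertion behind this spec is event-based: once a run-to-completion pod reaches Succeeded, the kubelet must not tear down and recreate the pod sandbox just because all containers have exited. A rough sketch of such a pod — the restartPolicy and command here are my own choices, and the real test's spec may differ in detail:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-always-succeed-demo      # illustrative name
    spec:
      restartPolicy: Never               # illustrative; nothing should rerun after exit 0
      containers:
      - name: succeed
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["true"]                # exits 0 immediately

The "Getting/Checking events" steps above then verify that the pod's event stream records exactly one sandbox creation and no SandboxChanged-style recreation afterwards.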
• [SLOW TEST:12.077 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":6,"skipped":534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:21.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 Nov 6 01:08:21.136: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:21.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-5977" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:19.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the 
pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:21.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-4504" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":3,"skipped":558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:17.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Nov 6 01:08:17.199: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-71909972-b4cc-4b32-91e7-87af244f5613" in namespace "security-context-test-9404" to be "Succeeded or Failed" Nov 6 01:08:17.203: INFO: Pod "alpine-nnp-nil-71909972-b4cc-4b32-91e7-87af244f5613": Phase="Pending", Reason="", readiness=false. Elapsed: 3.708272ms Nov 6 01:08:19.205: INFO: Pod "alpine-nnp-nil-71909972-b4cc-4b32-91e7-87af244f5613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006520134s Nov 6 01:08:21.209: INFO: Pod "alpine-nnp-nil-71909972-b4cc-4b32-91e7-87af244f5613": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010476285s Nov 6 01:08:23.215: INFO: Pod "alpine-nnp-nil-71909972-b4cc-4b32-91e7-87af244f5613": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015804014s Nov 6 01:08:25.219: INFO: Pod "alpine-nnp-nil-71909972-b4cc-4b32-91e7-87af244f5613": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020554349s Nov 6 01:08:27.224: INFO: Pod "alpine-nnp-nil-71909972-b4cc-4b32-91e7-87af244f5613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.025087099s Nov 6 01:08:27.224: INFO: Pod "alpine-nnp-nil-71909972-b4cc-4b32-91e7-87af244f5613" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:27.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9404" for this suite. 
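The point of this spec: when allowPrivilegeEscalation is left unset and the container runs as a non-root UID, the default still permits escalation (no_new_privs is not set), so a setuid binary inside the container can regain root. A sketch of the relevant fields, with an illustrative image and command standing in for the suite's setuid test helper:

    apiVersion: v1
    kind: Pod
    metadata:
      name: alpine-nnp-nil-demo          # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: alpine
        image: alpine:3.12.1             # stand-in; the suite uses a helper image with a setuid binary
        command: ["id", "-u"]            # illustrative; the helper execs setuid and reports its uid
        securityContext:
          runAsUser: 1000
          # allowPrivilegeEscalation deliberately unset: nil defaults to allowing escalation

Setting allowPrivilegeEscalation: false instead would apply no_new_privs and pin the effective UID at 1000.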
• [SLOW TEST:10.081 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":11,"skipped":1354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:21.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:32.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5780" for this suite. 
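The "create image pull secret" step corresponds to a kubernetes.io/dockerconfigjson secret referenced from the pod via imagePullSecrets. A minimal sketch, with an illustrative secret name and private image reference:

    apiVersion: v1
    kind: Pod
    metadata:
      name: private-image-demo           # illustrative name
    spec:
      imagePullSecrets:
      - name: regcred                    # e.g. created with: kubectl create secret docker-registry regcred ...
      containers:
      - name: app
        image: registry.example.com/team/app:1.0   # illustrative private image

Without the secret the kubelet's pull fails with ErrImagePull/ImagePullBackOff; with it, the "check the container status" step sees the container start normally.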
• [SLOW TEST:11.136 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":7,"skipped":594,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:20.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 in namespace container-probe-3411 Nov 6 01:07:32.901: INFO: Started pod busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 in namespace container-probe-3411 Nov 6 01:07:32.901: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (894ns elapsed) Nov 6 01:07:34.903: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (2.001565815s elapsed) Nov 6 01:07:36.904: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (4.002567172s elapsed) Nov 6 01:07:38.905: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (6.003735403s elapsed) Nov 6 01:07:40.910: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (8.008714034s elapsed) Nov 6 01:07:42.911: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (10.009267809s elapsed) Nov 6 01:07:44.911: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (12.009686806s elapsed) Nov 6 01:07:46.913: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (14.011937408s elapsed) Nov 6 01:07:48.914: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (16.012853894s elapsed) Nov 6 01:07:50.915: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (18.014180979s elapsed) Nov 6 01:07:52.916: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (20.014875183s elapsed) Nov 6 01:07:54.916: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (22.015253472s elapsed) Nov 6 01:07:56.918: INFO: pod 
container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (24.016516488s elapsed) Nov 6 01:07:58.918: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (26.01717778s elapsed) Nov 6 01:08:00.920: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (28.018747966s elapsed) Nov 6 01:08:02.920: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (30.01901816s elapsed) Nov 6 01:08:04.921: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (32.019413135s elapsed) Nov 6 01:08:06.922: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (34.020682874s elapsed) Nov 6 01:08:08.923: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (36.021742092s elapsed) Nov 6 01:08:10.924: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (38.023189917s elapsed) Nov 6 01:08:12.925: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (40.023849548s elapsed) Nov 6 01:08:14.925: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (42.024171715s elapsed) Nov 6 01:08:16.927: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (44.025423951s elapsed) Nov 6 01:08:18.928: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (46.026337754s elapsed) Nov 6 01:08:20.929: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (48.027589623s elapsed) Nov 6 01:08:22.929: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (50.027920787s elapsed) Nov 6 01:08:24.930: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (52.029087628s elapsed) Nov 6 01:08:26.932: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (54.030302181s elapsed) Nov 6 01:08:28.933: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (56.031406027s elapsed) Nov 6 01:08:30.934: INFO: pod container-probe-3411/busybox-3a76afca-cc22-4bcf-aa47-01eefae19589 is not ready (58.032629329s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:32.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3411" for this suite. 
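The contrast with the liveness case earlier: a readiness probe that times out (hence the ~58s of "is not ready" polling above) never restarts the container; it only keeps the pod's Ready condition False, so the pod stays out of Service endpoints. The [MinimumKubeletVersion:1.20] tag reflects the same exec-probe timeout enforcement noted above. A minimal sketch with illustrative names and timings:

    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-timeout-demo       # illustrative name
    spec:
      containers:
      - name: busybox
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sleep", "600"]
        readinessProbe:
          exec:
            command: ["/bin/sh", "-c", "sleep 10"]  # always exceeds timeoutSeconds
          timeoutSeconds: 1
          periodSeconds: 2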
• [SLOW TEST:72.089 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":2,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:33.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:33.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-2194" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":3,"skipped":163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:27.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Nov 6 01:08:27.537: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-60b8b3b1-c0c6-43d7-a6bd-5685d2b2ad97" in namespace "security-context-test-5214" to be "Succeeded or Failed" Nov 6 01:08:27.542: INFO: Pod "alpine-nnp-true-60b8b3b1-c0c6-43d7-a6bd-5685d2b2ad97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020589ms Nov 6 01:08:29.545: INFO: Pod "alpine-nnp-true-60b8b3b1-c0c6-43d7-a6bd-5685d2b2ad97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007705519s Nov 6 01:08:31.554: INFO: Pod "alpine-nnp-true-60b8b3b1-c0c6-43d7-a6bd-5685d2b2ad97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016192755s Nov 6 01:08:33.557: INFO: Pod "alpine-nnp-true-60b8b3b1-c0c6-43d7-a6bd-5685d2b2ad97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019871488s Nov 6 01:08:33.557: INFO: Pod "alpine-nnp-true-60b8b3b1-c0c6-43d7-a6bd-5685d2b2ad97" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:33.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5214" for this suite. • [SLOW TEST:6.101 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":12,"skipped":1494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:32.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 E1106 01:08:36.372261 24 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 249 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x653b640, 0x9beb6a0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86 panic(0x653b640, 0x9beb6a0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc003d30f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc004d16f40, 0xc003d30f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc00316f6b0, 0xc004d16f40, 0xc00438aa20, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc00316f6b0, 0xc004d16f40, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00316f6b0, 0xc004d16f40, 0xc00316f6b0, 0xc004d16f40) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc004d16f40, 0x14, 0xc00538f8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc001b23080, 0xc003cbb608, 0x14, 0xc00538f8c0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0010553e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0010553e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc00103bbe0, 0x768f9a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0007a71d0, 0x0, 0x768f9a0, 0xc0001ee800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0007a71d0, 0x768f9a0, 0xc0001ee800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0002c8280, 0xc0007a71d0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0002c8280, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0002c8280, 0xc000c78430) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7f740c496018, 0xc0010c8600, 0x6f05d9d, 0x14, 0xc004486780, 0x3, 0x3, 0x7745ab8, 0xc0001ee800, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x7694a60, 0xc0010c8600, 0x6f05d9d, 0x14, 0xc00453dfc0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x7694a60, 0xc0010c8600, 0x6f05d9d, 0x14, 0xc0045603e0, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0010c8600) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0010c8600) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0010c8600, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-probe-7558". STEP: Found 5 events. 
Nov 6 01:08:36.375: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for startup-bafffb4b-c532-4b90-920d-29c26feb7e62: { } Scheduled: Successfully assigned container-probe-7558/startup-bafffb4b-c532-4b90-920d-29c26feb7e62 to node2 Nov 6 01:08:36.375: INFO: At 2021-11-06 01:08:34 +0000 UTC - event for startup-bafffb4b-c532-4b90-920d-29c26feb7e62: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" Nov 6 01:08:36.375: INFO: At 2021-11-06 01:08:35 +0000 UTC - event for startup-bafffb4b-c532-4b90-920d-29c26feb7e62: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 492.728229ms Nov 6 01:08:36.375: INFO: At 2021-11-06 01:08:35 +0000 UTC - event for startup-bafffb4b-c532-4b90-920d-29c26feb7e62: {kubelet node2} Created: Created container busybox Nov 6 01:08:36.375: INFO: At 2021-11-06 01:08:35 +0000 UTC - event for startup-bafffb4b-c532-4b90-920d-29c26feb7e62: {kubelet node2} Started: Started container busybox Nov 6 01:08:36.377: INFO: POD NODE PHASE GRACE CONDITIONS Nov 6 01:08:36.377: INFO: startup-bafffb4b-c532-4b90-920d-29c26feb7e62 node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:08:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:08:32 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:08:32 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:08:32 +0000 UTC }] Nov 6 01:08:36.377: INFO: Nov 6 01:08:36.382: INFO: Logging node info for node master1 Nov 6 01:08:36.384: INFO: Node Info: &Node{ObjectMeta:{master1 acabf68f-e6fa-4376-87a7-953399a106b3 85444 0 2021-11-05 20:58:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:29 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:29 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:29 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:08:29 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 01:08:36.385: INFO: Logging kubelet events for node master1 Nov 6 01:08:36.387: INFO: Logging pods the kubelet thinks is on node master1 Nov 6 01:08:36.396: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:36.396: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 01:08:36.396: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:36.396: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 6 01:08:36.396: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:36.396: INFO: Container kube-scheduler ready: true, restart count 0 Nov 6 01:08:36.396: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 6 01:08:36.396: INFO: Init container install-cni ready: true, restart count 2 Nov 6 01:08:36.396: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:08:36.396: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:36.396: INFO: Container coredns ready: true, restart count 2 Nov 6 01:08:36.396: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 6 01:08:36.396: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:08:36.396: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:08:36.396: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:36.396: INFO: Container kube-proxy ready: true, restart count 1 Nov 6 01:08:36.396: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:36.396: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:08:36.396: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded) Nov 6 01:08:36.396: INFO: Container docker-registry ready: true, restart count 0 Nov 6 01:08:36.397: INFO: Container nginx ready: true, restart count 0 W1106 01:08:36.410944 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Nov 6 01:08:36.489: INFO: Latency metrics for node master1
Nov 6 01:08:36.489: INFO: Logging node info for node master2
Nov 6 01:08:36.492: INFO: Node Info: &Node{ObjectMeta:{master2 004d4571-8588-4d18-93d0-ad0af4174866 85466 0 2021-11-05 20:59:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory:
{{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:31 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:31 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:31 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:08:31 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 6 01:08:36.492: INFO: Logging kubelet events for node master2
Nov 6 01:08:36.494: INFO: Logging pods the kubelet thinks are on node master2
Nov 6 01:08:36.510: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov 6 01:08:36.510: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 6 01:08:36.510: INFO: Container node-exporter ready: true, restart count 0
Nov 6 01:08:36.510: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.510: INFO: Container kube-apiserver ready: true, restart count 0
Nov 6 01:08:36.510: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.510: INFO: Container kube-scheduler ready: true, restart count 3
Nov 6 01:08:36.510: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.510: INFO: Container kube-multus ready: true, restart count 1
Nov 6 01:08:36.510: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.510: INFO: Container nfd-controller ready: true, restart count 0
Nov 6 01:08:36.510: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.510: INFO: Container kube-controller-manager ready: true, restart count 2
Nov 6 01:08:36.510: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.510: INFO: Container kube-proxy ready: true, restart count 1
Nov 6 01:08:36.510: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov 6 01:08:36.510: INFO: Init container install-cni ready: true, restart count 0
Nov 6 01:08:36.510: INFO: Container kube-flannel ready: true, restart count 3
W1106 01:08:36.523564 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 6 01:08:36.594: INFO: Latency metrics for node master2
Nov 6 01:08:36.594: INFO: Logging node info for node master3
Nov 6 01:08:36.598: INFO: Node Info: &Node{ObjectMeta:{master3 d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 85552 0 2021-11-05 20:59:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory:
{{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:35 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:35 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:35 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:08:35 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 6 01:08:36.598: INFO: Logging kubelet events for node master3
Nov 6 01:08:36.601: INFO: Logging pods the kubelet thinks are on node master3
Nov 6 01:08:36.612: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.612: INFO: Container kube-controller-manager ready: true, restart count 2
Nov 6 01:08:36.612: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov 6 01:08:36.612: INFO: Init container install-cni ready: true, restart count 0
Nov 6 01:08:36.612: INFO: Container kube-flannel ready: true, restart count 1
Nov 6 01:08:36.612: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.612: INFO: Container coredns ready: true, restart count 1
Nov 6 01:08:36.612: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov 6 01:08:36.612: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 6 01:08:36.612: INFO: Container node-exporter ready: true, restart count 0
Nov 6 01:08:36.612: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.612: INFO: Container kube-apiserver ready: true, restart count 0
Nov 6 01:08:36.612: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.612: INFO: Container kube-proxy ready: true, restart count 2
Nov 6 01:08:36.612: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.612: INFO: Container kube-multus ready: true, restart count 1
Nov 6 01:08:36.612: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.612: INFO: Container autoscaler ready: true, restart count 1
Nov 6 01:08:36.612: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.612: INFO: Container kube-scheduler ready: true, restart count 3
W1106 01:08:36.623407 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 6 01:08:36.699: INFO: Latency metrics for node master3
Nov 6 01:08:36.699: INFO: Logging node info for node node1
Nov 6 01:08:36.702: INFO: Node Info: &Node{ObjectMeta:{node1 290b18e7-da33-4da8-b78a-8a7f28c49abf 85463 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources:
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 23:53:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:30 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:30 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:30 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:08:30 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 6 01:08:36.703: INFO: Logging kubelet events for node node1
Nov 6 01:08:36.705: INFO: Logging pods the kubelet thinks are on node node1
Nov 6 01:08:36.720: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container collectd ready: true, restart count 0
Nov 6 01:08:36.720: INFO: Container collectd-exporter ready: true, restart count 0
Nov 6 01:08:36.720: INFO: Container rbac-proxy ready: true, restart count 0
Nov 6 01:08:36.720: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container nginx-proxy ready: true, restart count 2
Nov 6 01:08:36.720: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container kubernetes-dashboard ready: true, restart count 1
Nov 6 01:08:36.720: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container cmk-webhook ready: true, restart count 0
Nov 6 01:08:36.720: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container nfd-worker ready: true, restart count 0
Nov 6 01:08:36.720: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container kube-sriovdp ready: true, restart count 0
Nov 6 01:08:36.720: INFO: startup-67709499-b08b-48fb-8a2d-dbfb707ee467 started at 2021-11-06 01:06:54 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container busybox ready: false, restart count 0
Nov 6 01:08:36.720: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Init container install-cni ready: true, restart count 2
Nov 6 01:08:36.720: INFO: Container kube-flannel ready: true, restart count 3
Nov 6 01:08:36.720: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container tas-extender ready: true, restart count 0
Nov 6 01:08:36.720: INFO: startup-70755378-314e-486c-ad82-5c9f67a8026f started at 2021-11-06 01:08:33 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container busybox ready: false, restart count 0
Nov 6 01:08:36.720: INFO: liveness-exec started at 2021-11-06 01:06:56 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container liveness-exec ready: true, restart count 1
Nov 6 01:08:36.720: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container kube-multus ready: true, restart count 1
Nov 6 01:08:36.720: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container discover ready: false, restart count 0
Nov 6 01:08:36.720: INFO: Container init ready: false, restart count 0
Nov 6 01:08:36.720: INFO: Container install ready: false, restart count 0
Nov 6 01:08:36.720: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 6 01:08:36.720: INFO: Container node-exporter ready: true, restart count 0
Nov 6 01:08:36.720: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container kube-proxy ready: true, restart count 2
Nov 6 01:08:36.720: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container nodereport ready: true, restart count 0
Nov 6 01:08:36.720: INFO: Container reconcile ready: true, restart count 0
Nov 6 01:08:36.720: INFO: prometheus-k8s-0 started at 2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container config-reloader ready: true, restart count 0
Nov 6 01:08:36.720: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Nov 6 01:08:36.720: INFO: Container grafana ready: true, restart count 0
Nov 6 01:08:36.720: INFO: Container prometheus ready: true, restart count 1
Nov 6 01:08:36.720: INFO: busybox-5f9c8558-c8cd-411e-b1b5-39696b0a6664 started at 2021-11-06 01:08:21 +0000 UTC (0+1 container statuses recorded)
Nov 6 01:08:36.720: INFO: Container busybox ready: true, restart count 0
W1106 01:08:36.733756 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 6 01:08:36.960: INFO: Latency metrics for node node1
Nov 6 01:08:36.960: INFO: Logging node info for node node2
Nov 6 01:08:36.963: INFO: Node Info: &Node{ObjectMeta:{node2 7d7e71f0-82d7-49ba-b69a-56600dd59b3f 85465 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64
feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 23:54:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-06 01:06:39 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:30 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:30 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:08:30 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:08:30 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 01:08:36.964: INFO: Logging kubelet events for node node2 Nov 6 01:08:36.965: INFO: Logging pods the kubelet thinks is on node node2 Nov 6 01:08:37.278: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 01:08:37.278: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Init container install-cni ready: true, restart count 1 Nov 6 01:08:37.278: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:08:37.278: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:08:37.278: INFO: pod-submit-status-0-9 started at 2021-11-06 01:08:24 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container busybox ready: false, restart count 0 Nov 6 01:08:37.278: INFO: alpine-nnp-true-60b8b3b1-c0c6-43d7-a6bd-5685d2b2ad97 started at 
2021-11-06 01:08:27 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container alpine-nnp-true-60b8b3b1-c0c6-43d7-a6bd-5685d2b2ad97 ready: false, restart count 0 Nov 6 01:08:37.278: INFO: liveness-5bc19d7f-6e2d-4d9d-819f-09f0111af089 started at 2021-11-06 01:06:59 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container agnhost-container ready: true, restart count 0 Nov 6 01:08:37.278: INFO: startup-b0a9397f-ce21-403f-901e-4255af402749 started at 2021-11-06 01:07:43 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container busybox ready: true, restart count 0 Nov 6 01:08:37.278: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:08:37.278: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:08:37.278: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 6 01:08:37.278: INFO: Container collectd ready: true, restart count 0 Nov 6 01:08:37.278: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:08:37.278: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:08:37.278: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded) Nov 6 01:08:37.278: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:08:37.278: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:08:37.278: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded) Nov 6 01:08:37.278: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:08:37.278: INFO: Container prometheus-operator ready: true, restart count 0 Nov 6 01:08:37.278: INFO: liveness-e90e4765-128a-4324-a872-887266f701f3 started at 2021-11-06 01:08:20 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container agnhost-container ready: true, restart count 0 Nov 6 01:08:37.278: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:08:37.278: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:08:37.278: INFO: startup-bafffb4b-c532-4b90-920d-29c26feb7e62 started at 2021-11-06 01:08:32 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container busybox ready: false, restart count 0 Nov 6 01:08:37.278: INFO: back-off-cap started at 2021-11-06 01:06:57 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.278: INFO: Container back-off-cap ready: false, restart count 3 Nov 6 01:08:37.278: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded) Nov 6 01:08:37.278: INFO: Container discover ready: false, restart count 0 Nov 6 01:08:37.278: INFO: Container init ready: false, restart count 0 Nov 6 01:08:37.279: INFO: Container install ready: false, restart count 0 Nov 6 01:08:37.279: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 6 01:08:37.279: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 Nov 6 01:08:37.279: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:08:37.279: INFO: pod-submit-status-1-7 started at 2021-11-06 01:08:18 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.279: INFO: Container busybox ready: false, restart count 0 Nov 6 01:08:37.279: INFO: pod-submit-status-2-9 started at 2021-11-06 01:08:30 +0000 UTC (0+1 container statuses recorded) Nov 6 01:08:37.279: INFO: Container busybox ready: false, restart count 0 W1106 01:08:37.291294 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 6 01:08:37.705: INFO: Latency metrics for node node2 Nov 6 01:08:37.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7558" for this suite. •! Panic [5.393 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x653b640, 0x9beb6a0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc003d30f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc004d16f40, 0xc003d30f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc00316f6b0, 0xc004d16f40, 0xc00438aa20, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc00316f6b0, 0xc004d16f40, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00316f6b0, 0xc004d16f40, 0xc00316f6b0, 0xc004d16f40) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc004d16f40, 0x14, 0xc00538f8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc001b23080, 0xc003cbb608, 0x14, 0xc00538f8c0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 
+0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0010c8600) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0010c8600) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0010c8600, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:43.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-b0a9397f-ce21-403f-901e-4255af402749 in namespace container-probe-5627 Nov 6 01:07:53.101: INFO: Started pod startup-b0a9397f-ce21-403f-901e-4255af402749 in namespace container-probe-5627 STEP: checking the pod's current state and verifying that restartCount is present Nov 6 01:07:53.103: INFO: Initial restart count of pod startup-b0a9397f-ce21-403f-901e-4255af402749 is 0 Nov 6 01:08:45.208: INFO: Restart count of pod container-probe-5627/startup-b0a9397f-ce21-403f-901e-4255af402749 is now 1 (52.105361797s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:45.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5627" for this suite. 
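------------------------------
The nil-pointer panic above originates in the framework's podContainerStarted condition (resource.go:334) while WaitForPodContainerStarted polls for the startup-probe result. A minimal sketch of the likely failure mode, assuming the condition dereferences ContainerStatus.Started without a guard: Started is a *bool that the kubelet leaves nil until startup-probe tracking has reported at least once, so an unguarded `*status.Started` crashes for a freshly scheduled pod. The function name and shape below are illustrative, not the framework's code; the guarded variant treats nil as "not started yet".

package sketch

import corev1 "k8s.io/api/core/v1"

// containerStarted reports whether the container at idx has been marked
// started by the kubelet. ContainerStatus.Started is a *bool and can be nil
// early in a pod's life; dereferencing it unguarded (the pattern suggested by
// the resource.go:334 frame above) is exactly the kind of access that yields
// "invalid memory address or nil pointer dereference".
func containerStarted(pod *corev1.Pod, idx int) bool {
	if idx < 0 || idx >= len(pod.Status.ContainerStatuses) {
		return false
	}
	started := pod.Status.ContainerStatuses[idx].Started
	return started != nil && *started // nil means the kubelet has not reported yet
}
------------------------------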
• [SLOW TEST:62.164 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":5,"skipped":809,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:20.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-e90e4765-128a-4324-a872-887266f701f3 in namespace container-probe-2606 Nov 6 01:08:30.839: INFO: Started pod liveness-e90e4765-128a-4324-a872-887266f701f3 in namespace container-probe-2606 STEP: checking the pod's current state and verifying that restartCount is present Nov 6 01:08:30.841: INFO: Initial restart count of pod liveness-e90e4765-128a-4324-a872-887266f701f3 is 0 Nov 6 01:08:52.884: INFO: Restart count of pod container-probe-2606/liveness-e90e4765-128a-4324-a872-887266f701f3 is now 1 (22.042018349s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:52.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2606" for this suite. 
• [SLOW TEST:32.106 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":4,"skipped":151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:53.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Nov 6 01:08:53.070: INFO: Waiting up to 5m0s for pod "security-context-2b313a94-996f-4ebc-9ac6-d4d60792535b" in namespace "security-context-1929" to be "Succeeded or Failed" Nov 6 01:08:53.073: INFO: Pod "security-context-2b313a94-996f-4ebc-9ac6-d4d60792535b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.974979ms Nov 6 01:08:55.078: INFO: Pod "security-context-2b313a94-996f-4ebc-9ac6-d4d60792535b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007172139s Nov 6 01:08:57.081: INFO: Pod "security-context-2b313a94-996f-4ebc-9ac6-d4d60792535b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010888979s STEP: Saw pod success Nov 6 01:08:57.081: INFO: Pod "security-context-2b313a94-996f-4ebc-9ac6-d4d60792535b" satisfied condition "Succeeded or Failed" Nov 6 01:08:57.084: INFO: Trying to get logs from node node2 pod security-context-2b313a94-996f-4ebc-9ac6-d4d60792535b container test-container: STEP: delete the pod Nov 6 01:08:57.095: INFO: Waiting for pod security-context-2b313a94-996f-4ebc-9ac6-d4d60792535b to disappear Nov 6 01:08:57.097: INFO: Pod security-context-2b313a94-996f-4ebc-9ac6-d4d60792535b no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:08:57.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1929" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":5,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Nov 6 01:08:57.229: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:38.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-1037f7d1-2922-45c2-b854-261e20360360 in namespace container-probe-3869 Nov 6 01:08:44.497: INFO: Started pod startup-override-1037f7d1-2922-45c2-b854-261e20360360 in namespace container-probe-3869 STEP: checking the pod's current state and verifying that restartCount is present Nov 6 01:08:44.499: INFO: Initial restart count of pod startup-override-1037f7d1-2922-45c2-b854-261e20360360 is 1 Nov 6 01:09:06.553: INFO: Restart count of pod container-probe-3869/startup-override-1037f7d1-2922-45c2-b854-261e20360360 is now 2 (22.053786571s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:09:06.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3869" for this suite. 
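------------------------------
The [Feature:ProbeTerminationGracePeriod] run above exercises the probe-level override introduced in 1.21: when a startup probe fails, the kubelet kills the container using the probe's own TerminationGracePeriodSeconds instead of the pod-wide value, so restarts arrive promptly even for slow-terminating containers. A sketch of such a probe; command, period, and grace values are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

var probeGrace int64 = 1 // seconds; overrides pod.Spec.TerminationGracePeriodSeconds on probe failure

// A deliberately failing startup probe carrying its own grace period,
// mirroring the startup-override pod above in shape only.
var startupOverride = &corev1.Probe{
	Handler: corev1.Handler{
		Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}, // assumed always-failing command
	},
	PeriodSeconds:                 10,
	FailureThreshold:              1,
	TerminationGracePeriodSeconds: &probeGrace,
}
------------------------------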
• [SLOW TEST:28.110 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:21.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-5f9c8558-c8cd-411e-b1b5-39696b0a6664 in namespace container-probe-9031 Nov 6 01:08:25.646: INFO: Started pod busybox-5f9c8558-c8cd-411e-b1b5-39696b0a6664 in namespace container-probe-9031 STEP: checking the pod's current state and verifying that restartCount is present Nov 6 01:08:25.649: INFO: Initial restart count of pod busybox-5f9c8558-c8cd-411e-b1b5-39696b0a6664 is 0 Nov 6 01:09:15.764: INFO: Restart count of pod container-probe-9031/busybox-5f9c8558-c8cd-411e-b1b5-39696b0a6664 is now 1 (50.11499628s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:09:15.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9031" for this suite. 
• [SLOW TEST:54.174 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":706,"failed":0} Nov 6 01:09:15.779: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:07:19.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Nov 6 01:07:24.248: INFO: watch delete seen for pod-submit-status-0-0 Nov 6 01:07:24.248: INFO: Pod pod-submit-status-0-0 on node node2 timings total=4.81319178s t=364ms run=0s execute=0s Nov 6 01:07:26.638: INFO: watch delete seen for pod-submit-status-1-0 Nov 6 01:07:26.639: INFO: Pod pod-submit-status-1-0 on node node2 timings total=7.204079787s t=1.241s run=0s execute=0s Nov 6 01:07:30.838: INFO: watch delete seen for pod-submit-status-2-0 Nov 6 01:07:30.838: INFO: Pod pod-submit-status-2-0 on node node2 timings total=11.403222632s t=1.219s run=0s execute=0s Nov 6 01:07:32.039: INFO: watch delete seen for pod-submit-status-0-1 Nov 6 01:07:32.039: INFO: Pod pod-submit-status-0-1 on node node2 timings total=7.790962962s t=1.834s run=0s execute=0s Nov 6 01:07:33.839: INFO: watch delete seen for pod-submit-status-1-1 Nov 6 01:07:33.839: INFO: Pod pod-submit-status-1-1 on node node2 timings total=7.200164329s t=682ms run=0s execute=0s Nov 6 01:07:39.441: INFO: watch delete seen for pod-submit-status-2-1 Nov 6 01:07:39.441: INFO: Pod pod-submit-status-2-1 on node node2 timings total=8.603205269s t=1.699s run=0s execute=0s Nov 6 01:07:41.246: INFO: watch delete seen for pod-submit-status-0-2 Nov 6 01:07:41.246: INFO: Pod pod-submit-status-0-2 on node node2 timings total=9.207380839s t=1.078s run=0s execute=0s Nov 6 01:07:42.437: INFO: watch delete seen for pod-submit-status-1-2 Nov 6 01:07:42.437: INFO: Pod pod-submit-status-1-2 on node node2 timings total=8.598388924s t=1.98s run=0s execute=0s Nov 6 01:07:48.840: INFO: watch delete seen for pod-submit-status-2-2 Nov 6 01:07:48.840: INFO: Pod pod-submit-status-2-2 on node node2 timings total=9.398710989s t=1.049s run=0s execute=0s Nov 6 01:07:50.036: INFO: watch delete seen for pod-submit-status-1-3 Nov 6 01:07:50.036: INFO: Pod pod-submit-status-1-3 on node node2 timings total=7.598683701s t=432ms run=0s execute=0s Nov 6 01:07:50.869: INFO: watch delete seen for pod-submit-status-0-3 Nov 6 01:07:50.869: INFO: Pod pod-submit-status-0-3 on node node2 timings total=9.622628714s t=1.848s run=3s execute=0s Nov 
6 01:07:51.837: INFO: watch delete seen for pod-submit-status-2-3 Nov 6 01:07:51.837: INFO: Pod pod-submit-status-2-3 on node node2 timings total=2.99694802s t=281ms run=0s execute=0s Nov 6 01:07:54.837: INFO: watch delete seen for pod-submit-status-0-4 Nov 6 01:07:54.837: INFO: Pod pod-submit-status-0-4 on node node2 timings total=3.968041771s t=688ms run=0s execute=0s Nov 6 01:07:57.147: INFO: watch delete seen for pod-submit-status-0-5 Nov 6 01:07:57.147: INFO: Pod pod-submit-status-0-5 on node node1 timings total=2.310353657s t=676ms run=0s execute=0s Nov 6 01:07:57.170: INFO: watch delete seen for pod-submit-status-2-4 Nov 6 01:07:57.171: INFO: Pod pod-submit-status-2-4 on node node1 timings total=5.333736254s t=1.785s run=3s execute=0s Nov 6 01:07:59.149: INFO: watch delete seen for pod-submit-status-1-4 Nov 6 01:07:59.149: INFO: Pod pod-submit-status-1-4 on node node1 timings total=9.113279698s t=1.553s run=2s execute=0s Nov 6 01:08:01.580: INFO: watch delete seen for pod-submit-status-2-5 Nov 6 01:08:01.580: INFO: Pod pod-submit-status-2-5 on node node1 timings total=4.409266365s t=442ms run=0s execute=0s Nov 6 01:08:08.680: INFO: watch delete seen for pod-submit-status-1-5 Nov 6 01:08:08.680: INFO: Pod pod-submit-status-1-5 on node node1 timings total=9.530321486s t=1.408s run=0s execute=0s Nov 6 01:08:08.690: INFO: watch delete seen for pod-submit-status-0-6 Nov 6 01:08:08.690: INFO: Pod pod-submit-status-0-6 on node node1 timings total=11.542811018s t=1.985s run=0s execute=0s Nov 6 01:08:08.812: INFO: watch delete seen for pod-submit-status-2-6 Nov 6 01:08:08.812: INFO: Pod pod-submit-status-2-6 on node node2 timings total=7.231978158s t=89ms run=0s execute=0s Nov 6 01:08:18.679: INFO: watch delete seen for pod-submit-status-0-7 Nov 6 01:08:18.679: INFO: Pod pod-submit-status-0-7 on node node1 timings total=9.988616144s t=1.316s run=0s execute=0s Nov 6 01:08:18.688: INFO: watch delete seen for pod-submit-status-2-7 Nov 6 01:08:18.688: INFO: Pod pod-submit-status-2-7 on node node1 timings total=9.875975479s t=1.238s run=0s execute=0s Nov 6 01:08:18.697: INFO: watch delete seen for pod-submit-status-1-6 Nov 6 01:08:18.697: INFO: Pod pod-submit-status-1-6 on node node1 timings total=10.016866978s t=714ms run=0s execute=0s Nov 6 01:08:24.908: INFO: watch delete seen for pod-submit-status-0-8 Nov 6 01:08:24.908: INFO: Pod pod-submit-status-0-8 on node node2 timings total=6.229018537s t=754ms run=0s execute=0s Nov 6 01:08:30.511: INFO: watch delete seen for pod-submit-status-2-8 Nov 6 01:08:30.511: INFO: Pod pod-submit-status-2-8 on node node2 timings total=11.823464955s t=792ms run=0s execute=0s Nov 6 01:08:38.471: INFO: watch delete seen for pod-submit-status-2-9 Nov 6 01:08:38.471: INFO: Pod pod-submit-status-2-9 on node node2 timings total=7.959788744s t=1.774s run=0s execute=0s Nov 6 01:08:39.061: INFO: watch delete seen for pod-submit-status-0-9 Nov 6 01:08:39.061: INFO: Pod pod-submit-status-0-9 on node node2 timings total=14.152766434s t=1.534s run=0s execute=0s Nov 6 01:08:39.858: INFO: watch delete seen for pod-submit-status-1-7 Nov 6 01:08:39.858: INFO: Pod pod-submit-status-1-7 on node node2 timings total=21.161595044s t=382ms run=0s execute=0s Nov 6 01:08:44.487: INFO: watch delete seen for pod-submit-status-2-10 Nov 6 01:08:44.487: INFO: Pod pod-submit-status-2-10 on node node2 timings total=6.015680203s t=222ms run=0s execute=0s Nov 6 01:08:48.676: INFO: watch delete seen for pod-submit-status-0-10 Nov 6 01:08:48.676: INFO: Pod pod-submit-status-0-10 on node node1 
timings total=9.615291677s t=1.12s run=0s execute=0s Nov 6 01:08:48.741: INFO: watch delete seen for pod-submit-status-1-8 Nov 6 01:08:48.742: INFO: Pod pod-submit-status-1-8 on node node2 timings total=8.883139035s t=1.789s run=0s execute=0s Nov 6 01:08:58.683: INFO: watch delete seen for pod-submit-status-2-11 Nov 6 01:08:58.683: INFO: Pod pod-submit-status-2-11 on node node1 timings total=14.196364495s t=523ms run=0s execute=0s Nov 6 01:08:58.693: INFO: watch delete seen for pod-submit-status-0-11 Nov 6 01:08:58.693: INFO: Pod pod-submit-status-0-11 on node node1 timings total=10.017221061s t=830ms run=0s execute=0s Nov 6 01:08:58.749: INFO: watch delete seen for pod-submit-status-1-9 Nov 6 01:08:58.749: INFO: Pod pod-submit-status-1-9 on node node2 timings total=10.007311811s t=162ms run=0s execute=0s Nov 6 01:09:01.615: INFO: watch delete seen for pod-submit-status-1-10 Nov 6 01:09:01.615: INFO: Pod pod-submit-status-1-10 on node node2 timings total=2.865845802s t=682ms run=0s execute=0s Nov 6 01:09:08.684: INFO: watch delete seen for pod-submit-status-0-12 Nov 6 01:09:08.684: INFO: Pod pod-submit-status-0-12 on node node1 timings total=9.990607584s t=19ms run=0s execute=0s Nov 6 01:09:08.747: INFO: watch delete seen for pod-submit-status-2-12 Nov 6 01:09:08.747: INFO: Pod pod-submit-status-2-12 on node node2 timings total=10.063929951s t=1.49s run=0s execute=0s Nov 6 01:09:08.759: INFO: watch delete seen for pod-submit-status-1-11 Nov 6 01:09:08.760: INFO: Pod pod-submit-status-1-11 on node node2 timings total=7.14470394s t=1.738s run=3s execute=0s Nov 6 01:09:14.992: INFO: watch delete seen for pod-submit-status-1-12 Nov 6 01:09:14.992: INFO: Pod pod-submit-status-1-12 on node node1 timings total=6.232865637s t=1.503s run=0s execute=0s Nov 6 01:09:18.678: INFO: watch delete seen for pod-submit-status-2-13 Nov 6 01:09:18.678: INFO: Pod pod-submit-status-2-13 on node node1 timings total=9.930573705s t=849ms run=0s execute=0s Nov 6 01:09:18.772: INFO: watch delete seen for pod-submit-status-0-13 Nov 6 01:09:18.772: INFO: Pod pod-submit-status-0-13 on node node2 timings total=10.087770597s t=1.969s run=2s execute=0s Nov 6 01:09:21.584: INFO: watch delete seen for pod-submit-status-1-13 Nov 6 01:09:21.584: INFO: Pod pod-submit-status-1-13 on node node1 timings total=6.591400798s t=1.396s run=0s execute=0s Nov 6 01:09:21.593: INFO: watch delete seen for pod-submit-status-2-14 Nov 6 01:09:21.593: INFO: Pod pod-submit-status-2-14 on node node1 timings total=2.91453552s t=676ms run=0s execute=0s Nov 6 01:09:28.682: INFO: watch delete seen for pod-submit-status-0-14 Nov 6 01:09:28.682: INFO: Pod pod-submit-status-0-14 on node node1 timings total=9.910036024s t=1.83s run=0s execute=0s Nov 6 01:09:39.394: INFO: watch delete seen for pod-submit-status-1-14 Nov 6 01:09:39.394: INFO: Pod pod-submit-status-1-14 on node node2 timings total=17.809969374s t=1.086s run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:09:39.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9823" for this suite. 
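------------------------------
The Pods Extended case above floods two nodes with short-lived pods that always exit 1, deletes each after a random delay (the "watch delete seen" lines), and asserts from the watch stream that a container that can only fail is never reported as succeeded. A minimal sketch of that invariant, checked per status update; the function and message are illustrative, not the framework's:

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// checkNeverSucceeded fails if any container of a must-exit-1 pod is ever
// reported as terminated with exit code 0.
func checkNeverSucceeded(pod *corev1.Pod) error {
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil && t.ExitCode == 0 {
			return fmt.Errorf("pod %s container %s reported success for a command that must exit 1", pod.Name, cs.Name)
		}
	}
	return nil
}
------------------------------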
• [SLOW TEST:139.988 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":5,"skipped":579,"failed":0} Nov 6 01:09:39.405: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:33.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-70755378-314e-486c-ad82-5c9f67a8026f in namespace container-probe-8779 Nov 6 01:08:37.191: INFO: Started pod startup-70755378-314e-486c-ad82-5c9f67a8026f in namespace container-probe-8779 STEP: checking the pod's current state and verifying that restartCount is present Nov 6 01:08:37.193: INFO: Initial restart count of pod startup-70755378-314e-486c-ad82-5c9f67a8026f is 0 Nov 6 01:09:45.334: INFO: Restart count of pod container-probe-8779/startup-70755378-314e-486c-ad82-5c9f67a8026f is now 1 (1m8.141606451s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:09:45.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8779" for this suite. 
• [SLOW TEST:72.193 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":4,"skipped":209,"failed":0} Nov 6 01:09:45.349: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:06:54.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-67709499-b08b-48fb-8a2d-dbfb707ee467 in namespace container-probe-3410 Nov 6 01:07:02.900: INFO: Started pod startup-67709499-b08b-48fb-8a2d-dbfb707ee467 in namespace container-probe-3410 STEP: checking the pod's current state and verifying that restartCount is present Nov 6 01:07:02.902: INFO: Initial restart count of pod startup-67709499-b08b-48fb-8a2d-dbfb707ee467 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:11:03.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3410" for this suite. 
• [SLOW TEST:248.616 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":2,"skipped":143,"failed":0} Nov 6 01:11:03.478: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:06:59.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-5bc19d7f-6e2d-4d9d-819f-09f0111af089 in namespace container-probe-7552 Nov 6 01:07:15.158: INFO: Started pod liveness-5bc19d7f-6e2d-4d9d-819f-09f0111af089 in namespace container-probe-7552 STEP: checking the pod's current state and verifying that restartCount is present Nov 6 01:07:15.161: INFO: Initial restart count of pod liveness-5bc19d7f-6e2d-4d9d-819f-09f0111af089 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:11:15.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7552" for this suite. 
• [SLOW TEST:256.566 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":3,"skipped":270,"failed":0} Nov 6 01:11:15.690: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:08:33.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Nov 6 01:08:33.727: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Nov 6 01:08:34.738: INFO: node status heartbeat is unchanged for 1.004597182s, waiting for 1m20s Nov 6 01:08:35.738: INFO: node status heartbeat is unchanged for 2.004594141s, waiting for 1m20s Nov 6 01:08:36.737: INFO: node status heartbeat is unchanged for 3.003862693s, waiting for 1m20s Nov 6 01:08:37.736: INFO: node status heartbeat is unchanged for 4.00272409s, waiting for 1m20s Nov 6 01:08:38.739: INFO: node status heartbeat is unchanged for 5.005671667s, waiting for 1m20s Nov 6 01:08:39.737: INFO: node status heartbeat is unchanged for 6.00339983s, waiting for 1m20s Nov 6 01:08:40.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:08:40.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:30 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:30 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:40 +0000 UTC"}, 
LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:30 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Nov 6 01:08:41.738: INFO: node status heartbeat is unchanged for 1.000805278s, waiting for 1m20s Nov 6 01:08:42.736: INFO: node status heartbeat is unchanged for 1.998802744s, waiting for 1m20s Nov 6 01:08:43.738: INFO: node status heartbeat is unchanged for 3.000340073s, waiting for 1m20s Nov 6 01:08:44.738: INFO: node status heartbeat is unchanged for 4.000128178s, waiting for 1m20s Nov 6 01:08:45.738: INFO: node status heartbeat is unchanged for 5.000546705s, waiting for 1m20s Nov 6 01:08:46.737: INFO: node status heartbeat is unchanged for 5.999805446s, waiting for 1m20s Nov 6 01:08:47.737: INFO: node status heartbeat is unchanged for 6.999132093s, waiting for 1m20s Nov 6 01:08:48.737: INFO: node status heartbeat is unchanged for 7.99977633s, waiting for 1m20s Nov 6 01:08:49.737: INFO: node status heartbeat is unchanged for 8.999806588s, waiting for 1m20s Nov 6 01:08:50.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:08:50.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:40 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:50 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:40 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:50 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:40 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:50 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", 
Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Nov 6 01:08:51.737: INFO: node status heartbeat is unchanged for 998.701865ms, waiting for 1m20s Nov 6 01:08:52.740: INFO: node status heartbeat is unchanged for 2.002185764s, waiting for 1m20s Nov 6 01:08:53.739: INFO: node status heartbeat is unchanged for 3.000575362s, waiting for 1m20s Nov 6 01:08:54.739: INFO: node status heartbeat is unchanged for 4.000892218s, waiting for 1m20s Nov 6 01:08:55.739: INFO: node status heartbeat is unchanged for 5.000608624s, waiting for 1m20s Nov 6 01:08:56.737: INFO: node status heartbeat is unchanged for 5.999306583s, waiting for 1m20s Nov 6 01:08:57.737: INFO: node status heartbeat is unchanged for 6.999554522s, waiting for 1m20s Nov 6 01:08:58.738: INFO: node status heartbeat is unchanged for 7.99989961s, waiting for 1m20s Nov 6 01:08:59.738: INFO: node status heartbeat is unchanged for 8.999603132s, waiting for 1m20s Nov 6 01:09:00.739: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:09:00.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:50 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:00 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:50 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:00 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:08:50 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:00 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:09:01.739: INFO: node status heartbeat is unchanged for 1.000198635s, waiting for 1m20s Nov 6 01:09:02.738: INFO: node status heartbeat is unchanged for 1.999158738s, waiting for 1m20s Nov 6 01:09:03.738: INFO: node status heartbeat is unchanged for 2.998960219s, waiting for 1m20s Nov 6 01:09:04.737: INFO: node status heartbeat is unchanged for 3.998411841s, waiting for 1m20s Nov 6 01:09:05.737: INFO: node status heartbeat is unchanged for 4.998582021s, waiting for 1m20s Nov 6 01:09:06.737: INFO: node status heartbeat is unchanged for 5.998429622s, waiting for 1m20s Nov 6 01:09:07.737: INFO: node status heartbeat is unchanged for 6.99835399s, waiting for 1m20s Nov 6 01:09:08.738: INFO: node status heartbeat is unchanged for 7.998980287s, waiting for 1m20s Nov 6 01:09:09.737: INFO: node status heartbeat is unchanged for 8.99796935s, waiting for 1m20s Nov 6 01:09:10.739: INFO: node status heartbeat is unchanged for 10.000446597s, waiting for 1m20s Nov 6 01:09:11.737: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Nov 6 01:09:11.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:00 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:11 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:00 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:11 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:00 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:11 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:09:12.737: INFO: node status heartbeat is unchanged for 1.000496135s, waiting for 1m20s Nov 6 01:09:13.740: INFO: node status heartbeat is unchanged for 2.00277048s, waiting for 1m20s Nov 6 01:09:14.739: INFO: node status heartbeat is unchanged for 3.002041614s, waiting for 1m20s Nov 6 01:09:15.736: INFO: node status heartbeat is unchanged for 3.999441279s, waiting for 1m20s Nov 6 01:09:16.737: INFO: node status heartbeat is unchanged for 5.000346394s, waiting for 1m20s Nov 6 01:09:17.736: INFO: node status heartbeat is unchanged for 5.999465168s, waiting for 1m20s Nov 6 01:09:18.738: INFO: node status heartbeat is unchanged for 7.000708319s, waiting for 1m20s Nov 6 01:09:19.740: INFO: node status heartbeat is unchanged for 8.003042507s, waiting for 1m20s Nov 6 01:09:20.738: INFO: node status heartbeat is unchanged for 9.000783239s, waiting for 1m20s Nov 6 01:09:21.739: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:09:21.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:21 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:21 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:21 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:09:22.737: INFO: node status heartbeat is unchanged for 997.926746ms, waiting for 1m20s Nov 6 01:09:23.739: INFO: node status heartbeat is unchanged for 1.99983935s, waiting for 1m20s Nov 6 01:09:24.741: INFO: node status heartbeat is unchanged for 3.002152627s, waiting for 1m20s Nov 6 01:09:25.737: INFO: node status heartbeat is unchanged for 3.998644664s, waiting for 1m20s Nov 6 01:09:26.740: INFO: node status heartbeat is unchanged for 5.00111549s, waiting for 1m20s Nov 6 01:09:27.738: INFO: node status heartbeat is unchanged for 5.999047635s, waiting for 1m20s Nov 6 01:09:28.740: INFO: node status heartbeat is unchanged for 7.001213337s, waiting for 1m20s Nov 6 01:09:29.739: INFO: node status heartbeat is unchanged for 7.999733548s, waiting for 1m20s Nov 6 01:09:30.738: INFO: node status heartbeat is unchanged for 8.998892389s, waiting for 1m20s Nov 6 01:09:31.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:09:31.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:31 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:31 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:31 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:09:32.738: INFO: node status heartbeat is unchanged for 1.000044349s, waiting for 1m20s Nov 6 01:09:33.737: INFO: node status heartbeat is unchanged for 1.999735364s, waiting for 1m20s Nov 6 01:09:34.739: INFO: node status heartbeat is unchanged for 3.001049821s, waiting for 1m20s Nov 6 01:09:35.738: INFO: node status heartbeat is unchanged for 3.999842013s, waiting for 1m20s Nov 6 01:09:36.738: INFO: node status heartbeat is unchanged for 5.000731395s, waiting for 1m20s Nov 6 01:09:37.737: INFO: node status heartbeat is unchanged for 5.999585561s, waiting for 1m20s Nov 6 01:09:38.737: INFO: node status heartbeat is unchanged for 6.999685322s, waiting for 1m20s Nov 6 01:09:39.737: INFO: node status heartbeat is unchanged for 7.999288692s, waiting for 1m20s Nov 6 01:09:40.737: INFO: node status heartbeat is unchanged for 8.999802258s, waiting for 1m20s Nov 6 01:09:41.738: INFO: node status heartbeat is unchanged for 9.999947915s, waiting for 1m20s Nov 6 01:09:42.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:09:42.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:41 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:41 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:41 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:09:43.738: INFO: node status heartbeat is unchanged for 1.000698061s, waiting for 1m20s Nov 6 01:09:44.738: INFO: node status heartbeat is unchanged for 1.999938038s, waiting for 1m20s Nov 6 01:09:45.738: INFO: node status heartbeat is unchanged for 3.000731405s, waiting for 1m20s Nov 6 01:09:46.739: INFO: node status heartbeat is unchanged for 4.001115555s, waiting for 1m20s Nov 6 01:09:47.738: INFO: node status heartbeat is unchanged for 5.000549804s, waiting for 1m20s Nov 6 01:09:48.737: INFO: node status heartbeat is unchanged for 5.999013463s, waiting for 1m20s Nov 6 01:09:49.738: INFO: node status heartbeat is unchanged for 7.00017076s, waiting for 1m20s Nov 6 01:09:50.736: INFO: node status heartbeat is unchanged for 7.998812217s, waiting for 1m20s Nov 6 01:09:51.737: INFO: node status heartbeat is unchanged for 8.999845849s, waiting for 1m20s Nov 6 01:09:52.737: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:09:52.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:51 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:51 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:51 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:09:53.737: INFO: node status heartbeat is unchanged for 999.510415ms, waiting for 1m20s Nov 6 01:09:54.737: INFO: node status heartbeat is unchanged for 1.999894053s, waiting for 1m20s Nov 6 01:09:55.739: INFO: node status heartbeat is unchanged for 3.001336837s, waiting for 1m20s Nov 6 01:09:56.737: INFO: node status heartbeat is unchanged for 3.999966068s, waiting for 1m20s Nov 6 01:09:57.738: INFO: node status heartbeat is unchanged for 5.000946576s, waiting for 1m20s Nov 6 01:09:58.741: INFO: node status heartbeat is unchanged for 6.003342706s, waiting for 1m20s Nov 6 01:09:59.739: INFO: node status heartbeat is unchanged for 7.002229342s, waiting for 1m20s Nov 6 01:10:00.741: INFO: node status heartbeat is unchanged for 8.003615111s, waiting for 1m20s Nov 6 01:10:01.737: INFO: node status heartbeat is unchanged for 8.999815109s, waiting for 1m20s Nov 6 01:10:02.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:10:02.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:01 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:01 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:09:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:01 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:10:03.738: INFO: node status heartbeat is unchanged for 999.411715ms, waiting for 1m20s Nov 6 01:10:04.737: INFO: node status heartbeat is unchanged for 1.999090358s, waiting for 1m20s Nov 6 01:10:05.738: INFO: node status heartbeat is unchanged for 2.999761853s, waiting for 1m20s Nov 6 01:10:06.738: INFO: node status heartbeat is unchanged for 3.999792669s, waiting for 1m20s Nov 6 01:10:07.738: INFO: node status heartbeat is unchanged for 4.999401101s, waiting for 1m20s Nov 6 01:10:08.739: INFO: node status heartbeat is unchanged for 6.000385466s, waiting for 1m20s Nov 6 01:10:09.737: INFO: node status heartbeat is unchanged for 6.998492319s, waiting for 1m20s Nov 6 01:10:10.738: INFO: node status heartbeat is unchanged for 7.999235374s, waiting for 1m20s Nov 6 01:10:11.738: INFO: node status heartbeat is unchanged for 8.999260675s, waiting for 1m20s Nov 6 01:10:12.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:10:12.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:01 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:11 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:01 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:11 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:01 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:11 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:10:13.738: INFO: node status heartbeat is unchanged for 999.976879ms, waiting for 1m20s Nov 6 01:10:14.738: INFO: node status heartbeat is unchanged for 1.999983422s, waiting for 1m20s Nov 6 01:10:15.738: INFO: node status heartbeat is unchanged for 3.000159264s, waiting for 1m20s Nov 6 01:10:16.738: INFO: node status heartbeat is unchanged for 4.000112985s, waiting for 1m20s Nov 6 01:10:17.737: INFO: node status heartbeat is unchanged for 4.999566211s, waiting for 1m20s Nov 6 01:10:18.739: INFO: node status heartbeat is unchanged for 6.000886263s, waiting for 1m20s Nov 6 01:10:19.740: INFO: node status heartbeat is unchanged for 7.001694009s, waiting for 1m20s Nov 6 01:10:20.739: INFO: node status heartbeat is unchanged for 8.001522413s, waiting for 1m20s Nov 6 01:10:21.738: INFO: node status heartbeat is unchanged for 9.000109069s, waiting for 1m20s Nov 6 01:10:22.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:10:22.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:21 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:21 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:21 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:10:23.738: INFO: node status heartbeat is unchanged for 1.000407326s, waiting for 1m20s Nov 6 01:10:24.738: INFO: node status heartbeat is unchanged for 2.000413144s, waiting for 1m20s Nov 6 01:10:25.737: INFO: node status heartbeat is unchanged for 2.999232482s, waiting for 1m20s Nov 6 01:10:26.764: INFO: node status heartbeat is unchanged for 4.026798215s, waiting for 1m20s Nov 6 01:10:27.737: INFO: node status heartbeat is unchanged for 4.999510802s, waiting for 1m20s Nov 6 01:10:28.740: INFO: node status heartbeat is unchanged for 6.002179057s, waiting for 1m20s Nov 6 01:10:29.738: INFO: node status heartbeat is unchanged for 7.000630683s, waiting for 1m20s Nov 6 01:10:30.740: INFO: node status heartbeat is unchanged for 8.002224454s, waiting for 1m20s Nov 6 01:10:31.739: INFO: node status heartbeat is unchanged for 9.001769277s, waiting for 1m20s Nov 6 01:10:32.739: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Nov 6 01:10:32.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:32 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:32 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:32 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:10:33.738: INFO: node status heartbeat is unchanged for 999.096863ms, waiting for 1m20s Nov 6 01:10:34.738: INFO: node status heartbeat is unchanged for 1.999466084s, waiting for 1m20s Nov 6 01:10:35.738: INFO: node status heartbeat is unchanged for 2.999439581s, waiting for 1m20s Nov 6 01:10:36.738: INFO: node status heartbeat is unchanged for 3.999179011s, waiting for 1m20s Nov 6 01:10:37.738: INFO: node status heartbeat is unchanged for 4.999399981s, waiting for 1m20s Nov 6 01:10:38.738: INFO: node status heartbeat is unchanged for 5.999829164s, waiting for 1m20s Nov 6 01:10:39.738: INFO: node status heartbeat is unchanged for 6.999174596s, waiting for 1m20s Nov 6 01:10:40.738: INFO: node status heartbeat is unchanged for 7.999438229s, waiting for 1m20s Nov 6 01:10:41.737: INFO: node status heartbeat is unchanged for 8.998994573s, waiting for 1m20s Nov 6 01:10:42.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:10:42.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:42 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:42 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:42 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:10:43.738: INFO: node status heartbeat is unchanged for 999.657121ms, waiting for 1m20s Nov 6 01:10:44.736: INFO: node status heartbeat is unchanged for 1.997942678s, waiting for 1m20s Nov 6 01:10:45.737: INFO: node status heartbeat is unchanged for 2.999131168s, waiting for 1m20s Nov 6 01:10:46.738: INFO: node status heartbeat is unchanged for 4.000197736s, waiting for 1m20s Nov 6 01:10:47.737: INFO: node status heartbeat is unchanged for 4.998904185s, waiting for 1m20s Nov 6 01:10:48.737: INFO: node status heartbeat is unchanged for 5.998935115s, waiting for 1m20s Nov 6 01:10:49.739: INFO: node status heartbeat is unchanged for 7.000326969s, waiting for 1m20s Nov 6 01:10:50.738: INFO: node status heartbeat is unchanged for 7.999300569s, waiting for 1m20s Nov 6 01:10:51.738: INFO: node status heartbeat is unchanged for 8.999902419s, waiting for 1m20s Nov 6 01:10:52.739: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:10:52.744: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:52 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:52 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:52 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:10:53.739: INFO: node status heartbeat is unchanged for 1.000109727s, waiting for 1m20s Nov 6 01:10:54.737: INFO: node status heartbeat is unchanged for 1.998984172s, waiting for 1m20s Nov 6 01:10:55.740: INFO: node status heartbeat is unchanged for 3.001711741s, waiting for 1m20s Nov 6 01:10:56.738: INFO: node status heartbeat is unchanged for 3.999786322s, waiting for 1m20s Nov 6 01:10:57.738: INFO: node status heartbeat is unchanged for 4.999139753s, waiting for 1m20s Nov 6 01:10:58.738: INFO: node status heartbeat is unchanged for 5.999789313s, waiting for 1m20s Nov 6 01:10:59.738: INFO: node status heartbeat is unchanged for 6.999697588s, waiting for 1m20s Nov 6 01:11:00.738: INFO: node status heartbeat is unchanged for 7.999166712s, waiting for 1m20s Nov 6 01:11:01.739: INFO: node status heartbeat is unchanged for 9.000841813s, waiting for 1m20s Nov 6 01:11:02.737: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:11:02.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:02 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:02 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:10:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:02 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:11:03.737: INFO: node status heartbeat is unchanged for 1.000421458s, waiting for 1m20s Nov 6 01:11:04.738: INFO: node status heartbeat is unchanged for 2.000629206s, waiting for 1m20s Nov 6 01:11:05.738: INFO: node status heartbeat is unchanged for 3.00112054s, waiting for 1m20s Nov 6 01:11:06.738: INFO: node status heartbeat is unchanged for 4.000681655s, waiting for 1m20s Nov 6 01:11:07.737: INFO: node status heartbeat is unchanged for 4.999924396s, waiting for 1m20s Nov 6 01:11:08.737: INFO: node status heartbeat is unchanged for 5.999946206s, waiting for 1m20s Nov 6 01:11:09.738: INFO: node status heartbeat is unchanged for 7.000829297s, waiting for 1m20s Nov 6 01:11:10.739: INFO: node status heartbeat is unchanged for 8.00199928s, waiting for 1m20s Nov 6 01:11:11.738: INFO: node status heartbeat is unchanged for 9.001062443s, waiting for 1m20s Nov 6 01:11:12.737: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:11:12.741: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:12 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:12 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:12 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:11:13.739: INFO: node status heartbeat is unchanged for 1.001897612s, waiting for 1m20s Nov 6 01:11:14.737: INFO: node status heartbeat is unchanged for 2.000658937s, waiting for 1m20s Nov 6 01:11:15.737: INFO: node status heartbeat is unchanged for 3.000733544s, waiting for 1m20s Nov 6 01:11:16.740: INFO: node status heartbeat is unchanged for 4.002840191s, waiting for 1m20s Nov 6 01:11:17.737: INFO: node status heartbeat is unchanged for 5.000716316s, waiting for 1m20s Nov 6 01:11:18.740: INFO: node status heartbeat is unchanged for 6.003002087s, waiting for 1m20s Nov 6 01:11:19.738: INFO: node status heartbeat is unchanged for 7.000952193s, waiting for 1m20s Nov 6 01:11:20.740: INFO: node status heartbeat is unchanged for 8.003079118s, waiting for 1m20s Nov 6 01:11:21.738: INFO: node status heartbeat is unchanged for 9.001387269s, waiting for 1m20s Nov 6 01:11:22.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:11:22.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:22 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:22 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:22 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:11:23.738: INFO: node status heartbeat is unchanged for 1.000375217s, waiting for 1m20s Nov 6 01:11:24.739: INFO: node status heartbeat is unchanged for 2.001237359s, waiting for 1m20s Nov 6 01:11:25.738: INFO: node status heartbeat is unchanged for 3.000395584s, waiting for 1m20s Nov 6 01:11:26.740: INFO: node status heartbeat is unchanged for 4.002604973s, waiting for 1m20s Nov 6 01:11:27.736: INFO: node status heartbeat is unchanged for 4.998935329s, waiting for 1m20s Nov 6 01:11:28.738: INFO: node status heartbeat is unchanged for 6.000887114s, waiting for 1m20s Nov 6 01:11:29.739: INFO: node status heartbeat is unchanged for 7.001923516s, waiting for 1m20s Nov 6 01:11:30.740: INFO: node status heartbeat is unchanged for 8.002435768s, waiting for 1m20s Nov 6 01:11:31.738: INFO: node status heartbeat is unchanged for 9.000889959s, waiting for 1m20s Nov 6 01:11:32.738: INFO: node status heartbeat is unchanged for 10.000405905s, waiting for 1m20s Nov 6 01:11:33.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:11:33.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:32 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:32 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:32 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:11:34.739: INFO: node status heartbeat is unchanged for 1.000636178s, waiting for 1m20s Nov 6 01:11:35.739: INFO: node status heartbeat is unchanged for 2.000949754s, waiting for 1m20s Nov 6 01:11:36.740: INFO: node status heartbeat is unchanged for 3.002027604s, waiting for 1m20s Nov 6 01:11:37.737: INFO: node status heartbeat is unchanged for 3.998592042s, waiting for 1m20s Nov 6 01:11:38.739: INFO: node status heartbeat is unchanged for 5.000497006s, waiting for 1m20s Nov 6 01:11:39.738: INFO: node status heartbeat is unchanged for 5.999951541s, waiting for 1m20s Nov 6 01:11:40.739: INFO: node status heartbeat is unchanged for 7.00087586s, waiting for 1m20s Nov 6 01:11:41.739: INFO: node status heartbeat is unchanged for 8.000992015s, waiting for 1m20s Nov 6 01:11:42.739: INFO: node status heartbeat is unchanged for 9.000268219s, waiting for 1m20s Nov 6 01:11:43.740: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:11:43.744: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:42 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:42 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:42 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:11:44.740: INFO: node status heartbeat is unchanged for 1.000308715s, waiting for 1m20s Nov 6 01:11:45.739: INFO: node status heartbeat is unchanged for 1.999361298s, waiting for 1m20s Nov 6 01:11:46.739: INFO: node status heartbeat is unchanged for 2.999204099s, waiting for 1m20s Nov 6 01:11:47.739: INFO: node status heartbeat is unchanged for 3.999377934s, waiting for 1m20s Nov 6 01:11:48.739: INFO: node status heartbeat is unchanged for 4.999147488s, waiting for 1m20s Nov 6 01:11:49.738: INFO: node status heartbeat is unchanged for 5.998764357s, waiting for 1m20s Nov 6 01:11:50.738: INFO: node status heartbeat is unchanged for 6.99809207s, waiting for 1m20s Nov 6 01:11:51.737: INFO: node status heartbeat is unchanged for 7.997905967s, waiting for 1m20s Nov 6 01:11:52.737: INFO: node status heartbeat is unchanged for 8.997433924s, waiting for 1m20s Nov 6 01:11:53.737: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:11:53.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:52 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:52 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:52 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:11:54.738: INFO: node status heartbeat is unchanged for 1.000919171s, waiting for 1m20s Nov 6 01:11:55.738: INFO: node status heartbeat is unchanged for 2.000591629s, waiting for 1m20s Nov 6 01:11:56.738: INFO: node status heartbeat is unchanged for 3.001437965s, waiting for 1m20s Nov 6 01:11:57.739: INFO: node status heartbeat is unchanged for 4.0018029s, waiting for 1m20s Nov 6 01:11:58.737: INFO: node status heartbeat is unchanged for 4.999788945s, waiting for 1m20s Nov 6 01:11:59.738: INFO: node status heartbeat is unchanged for 6.000756843s, waiting for 1m20s Nov 6 01:12:00.738: INFO: node status heartbeat is unchanged for 7.001068554s, waiting for 1m20s Nov 6 01:12:01.737: INFO: node status heartbeat is unchanged for 8.00053406s, waiting for 1m20s Nov 6 01:12:02.737: INFO: node status heartbeat is unchanged for 8.999723282s, waiting for 1m20s Nov 6 01:12:03.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:12:03.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:02 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:02 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:11:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:02 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:12:04.737: INFO: node status heartbeat is unchanged for 998.827202ms, waiting for 1m20s Nov 6 01:12:05.739: INFO: node status heartbeat is unchanged for 2.001406621s, waiting for 1m20s Nov 6 01:12:06.738: INFO: node status heartbeat is unchanged for 3.000277869s, waiting for 1m20s Nov 6 01:12:07.737: INFO: node status heartbeat is unchanged for 3.999370933s, waiting for 1m20s Nov 6 01:12:08.738: INFO: node status heartbeat is unchanged for 4.99986189s, waiting for 1m20s Nov 6 01:12:09.737: INFO: node status heartbeat is unchanged for 5.999646434s, waiting for 1m20s Nov 6 01:12:10.738: INFO: node status heartbeat is unchanged for 7.000586559s, waiting for 1m20s Nov 6 01:12:11.740: INFO: node status heartbeat is unchanged for 8.002021322s, waiting for 1m20s Nov 6 01:12:12.738: INFO: node status heartbeat is unchanged for 8.999763086s, waiting for 1m20s Nov 6 01:12:13.738: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Nov 6 01:12:13.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:13 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:13 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:13 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:12:14.737: INFO: node status heartbeat is unchanged for 999.733538ms, waiting for 1m20s Nov 6 01:12:15.737: INFO: node status heartbeat is unchanged for 1.999063968s, waiting for 1m20s Nov 6 01:12:16.740: INFO: node status heartbeat is unchanged for 3.002071109s, waiting for 1m20s Nov 6 01:12:17.738: INFO: node status heartbeat is unchanged for 4.000702133s, waiting for 1m20s Nov 6 01:12:18.737: INFO: node status heartbeat is unchanged for 4.99922236s, waiting for 1m20s Nov 6 01:12:19.737: INFO: node status heartbeat is unchanged for 5.999847175s, waiting for 1m20s Nov 6 01:12:20.737: INFO: node status heartbeat is unchanged for 6.999835983s, waiting for 1m20s Nov 6 01:12:21.738: INFO: node status heartbeat is unchanged for 8.000658022s, waiting for 1m20s Nov 6 01:12:22.738: INFO: node status heartbeat is unchanged for 9.000136202s, waiting for 1m20s Nov 6 01:12:23.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:12:23.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:23 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:23 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:23 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:12:24.739: INFO: node status heartbeat is unchanged for 1.000888777s, waiting for 1m20s Nov 6 01:12:25.741: INFO: node status heartbeat is unchanged for 2.002747575s, waiting for 1m20s Nov 6 01:12:26.738: INFO: node status heartbeat is unchanged for 3.000635154s, waiting for 1m20s Nov 6 01:12:27.738: INFO: node status heartbeat is unchanged for 3.999781885s, waiting for 1m20s Nov 6 01:12:28.738: INFO: node status heartbeat is unchanged for 5.000014964s, waiting for 1m20s Nov 6 01:12:29.737: INFO: node status heartbeat is unchanged for 5.998945036s, waiting for 1m20s Nov 6 01:12:30.737: INFO: node status heartbeat is unchanged for 6.999500261s, waiting for 1m20s Nov 6 01:12:31.738: INFO: node status heartbeat is unchanged for 8.00056209s, waiting for 1m20s Nov 6 01:12:32.738: INFO: node status heartbeat is unchanged for 9.000113562s, waiting for 1m20s Nov 6 01:12:33.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:12:33.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:12:34.737: INFO: node status heartbeat is unchanged for 999.272349ms, waiting for 1m20s Nov 6 01:12:35.739: INFO: node status heartbeat is unchanged for 2.001102673s, waiting for 1m20s Nov 6 01:12:36.740: INFO: node status heartbeat is unchanged for 3.002564014s, waiting for 1m20s Nov 6 01:12:37.737: INFO: node status heartbeat is unchanged for 3.999259625s, waiting for 1m20s Nov 6 01:12:38.739: INFO: node status heartbeat is unchanged for 5.001171548s, waiting for 1m20s Nov 6 01:12:39.737: INFO: node status heartbeat is unchanged for 5.999205809s, waiting for 1m20s Nov 6 01:12:40.737: INFO: node status heartbeat is unchanged for 6.999616232s, waiting for 1m20s Nov 6 01:12:41.739: INFO: node status heartbeat is unchanged for 8.001230821s, waiting for 1m20s Nov 6 01:12:42.736: INFO: node status heartbeat is unchanged for 8.998942543s, waiting for 1m20s Nov 6 01:12:43.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:12:43.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:43 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:43 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:43 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:12:44.737: INFO: node status heartbeat is unchanged for 999.504144ms, waiting for 1m20s Nov 6 01:12:45.737: INFO: node status heartbeat is unchanged for 1.999394051s, waiting for 1m20s Nov 6 01:12:46.738: INFO: node status heartbeat is unchanged for 2.999696635s, waiting for 1m20s Nov 6 01:12:47.738: INFO: node status heartbeat is unchanged for 3.999814155s, waiting for 1m20s Nov 6 01:12:48.738: INFO: node status heartbeat is unchanged for 4.999831051s, waiting for 1m20s Nov 6 01:12:49.737: INFO: node status heartbeat is unchanged for 5.999407773s, waiting for 1m20s Nov 6 01:12:50.737: INFO: node status heartbeat is unchanged for 6.999314519s, waiting for 1m20s Nov 6 01:12:51.737: INFO: node status heartbeat is unchanged for 7.99928314s, waiting for 1m20s Nov 6 01:12:52.739: INFO: node status heartbeat is unchanged for 9.000886372s, waiting for 1m20s Nov 6 01:12:53.736: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:12:53.741: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:53 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:53 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:53 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:12:54.739: INFO: node status heartbeat is unchanged for 1.00276003s, waiting for 1m20s Nov 6 01:12:55.737: INFO: node status heartbeat is unchanged for 2.00129914s, waiting for 1m20s Nov 6 01:12:56.737: INFO: node status heartbeat is unchanged for 3.000831516s, waiting for 1m20s Nov 6 01:12:57.739: INFO: node status heartbeat is unchanged for 4.002366732s, waiting for 1m20s Nov 6 01:12:58.738: INFO: node status heartbeat is unchanged for 5.001998369s, waiting for 1m20s Nov 6 01:12:59.738: INFO: node status heartbeat is unchanged for 6.001407181s, waiting for 1m20s Nov 6 01:13:00.738: INFO: node status heartbeat is unchanged for 7.001667287s, waiting for 1m20s Nov 6 01:13:01.738: INFO: node status heartbeat is unchanged for 8.00225788s, waiting for 1m20s Nov 6 01:13:02.739: INFO: node status heartbeat is unchanged for 9.002681756s, waiting for 1m20s Nov 6 01:13:03.738: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:13:03.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:03 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:03 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:12:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:03 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:13:04.738: INFO: node status heartbeat is unchanged for 999.506555ms, waiting for 1m20s Nov 6 01:13:05.738: INFO: node status heartbeat is unchanged for 1.999769837s, waiting for 1m20s Nov 6 01:13:06.739: INFO: node status heartbeat is unchanged for 3.000872846s, waiting for 1m20s Nov 6 01:13:07.737: INFO: node status heartbeat is unchanged for 3.998544113s, waiting for 1m20s Nov 6 01:13:08.739: INFO: node status heartbeat is unchanged for 5.00093852s, waiting for 1m20s Nov 6 01:13:09.738: INFO: node status heartbeat is unchanged for 5.99959299s, waiting for 1m20s Nov 6 01:13:10.738: INFO: node status heartbeat is unchanged for 7.000141134s, waiting for 1m20s Nov 6 01:13:11.740: INFO: node status heartbeat is unchanged for 8.00156837s, waiting for 1m20s Nov 6 01:13:12.737: INFO: node status heartbeat is unchanged for 8.998556325s, waiting for 1m20s Nov 6 01:13:13.739: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:13:13.743: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:13 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:13 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:13 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:13:14.739: INFO: node status heartbeat is unchanged for 999.880148ms, waiting for 1m20s Nov 6 01:13:15.737: INFO: node status heartbeat is unchanged for 1.997981191s, waiting for 1m20s Nov 6 01:13:16.739: INFO: node status heartbeat is unchanged for 2.999909129s, waiting for 1m20s Nov 6 01:13:17.738: INFO: node status heartbeat is unchanged for 3.998759701s, waiting for 1m20s Nov 6 01:13:18.738: INFO: node status heartbeat is unchanged for 4.999213508s, waiting for 1m20s Nov 6 01:13:19.738: INFO: node status heartbeat is unchanged for 5.998770912s, waiting for 1m20s Nov 6 01:13:20.739: INFO: node status heartbeat is unchanged for 7.000617216s, waiting for 1m20s Nov 6 01:13:21.738: INFO: node status heartbeat is unchanged for 7.998943658s, waiting for 1m20s Nov 6 01:13:22.737: INFO: node status heartbeat is unchanged for 8.998541713s, waiting for 1m20s Nov 6 01:13:23.737: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:13:23.742: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:23 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:23 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:23 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Nov 6 01:13:24.739: INFO: node status heartbeat is unchanged for 1.00179628s, waiting for 1m20s Nov 6 01:13:25.737: INFO: node status heartbeat is unchanged for 2.000204754s, waiting for 1m20s Nov 6 01:13:26.737: INFO: node status heartbeat is unchanged for 3.000268993s, waiting for 1m20s Nov 6 01:13:27.738: INFO: node status heartbeat is unchanged for 4.000772869s, waiting for 1m20s Nov 6 01:13:28.737: INFO: node status heartbeat is unchanged for 4.999967405s, waiting for 1m20s Nov 6 01:13:29.739: INFO: node status heartbeat is unchanged for 6.001757014s, waiting for 1m20s Nov 6 01:13:30.738: INFO: node status heartbeat is unchanged for 7.00085006s, waiting for 1m20s Nov 6 01:13:31.739: INFO: node status heartbeat is unchanged for 8.001392197s, waiting for 1m20s Nov 6 01:13:32.739: INFO: node status heartbeat is unchanged for 9.00207096s, waiting for 1m20s Nov 6 01:13:33.737: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 6 01:13:33.741: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:04:40 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-11-06 01:13:33 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-05 21:00:39 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-05 21:01:47 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields
}
Nov 6 01:13:33.745: INFO: node status heartbeat is unchanged for 8.009551ms, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:13:33.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-2134" for this suite.

• [SLOW TEST:300.054 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":13,"skipped":1545,"failed":0}
Nov 6 01:13:33.762: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:08:45.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
Nov 6 01:08:45.289: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:08:47.293: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:08:49.292: INFO: The status of Pod pod-back-off-image is Running (Ready = true)
STEP: getting restart delay-0
Nov 6 01:10:43.490: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-11-06 01:09:49 +0000 UTC restartedAt=2021-11-06 01:10:43 +0000 UTC (54s)
STEP: getting restart delay-1
Nov 6 01:12:10.865: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-11-06 01:10:48 +0000 UTC restartedAt=2021-11-06 01:12:09 +0000 UTC (1m21s)
STEP: getting restart delay-2
Nov 6 01:14:59.577: INFO: getRestartDelay: restartCount = 6, finishedAt=2021-11-06 01:12:14 +0000 UTC restartedAt=2021-11-06 01:14:59 +0000 UTC (2m45s)
STEP: updating the image
Nov 6 01:15:00.088: INFO: Successfully updated pod "pod-back-off-image"
STEP: get restart delay after image update
Nov 6 01:15:23.155: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-11-06 01:15:09 +0000 UTC restartedAt=2021-11-06 01:15:22 +0000 UTC (13s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:15:23.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9222" for this suite.
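------------------------------
The four delays just above (54s, 1m21s, 2m45s, and then 13s right after "updating the image") are the behavior this spec asserts: the kubelet's crash-loop back-off roughly doubles with each restart, and editing the container spec (here, its image) resets the timer so the next restart happens quickly. The Go sketch below is a toy model of that growth written for this note, not the e2e framework's code; the 10-second seed and 5-minute ceiling are the kubelet's usual defaults, and the helper name is ours.

package main

import (
	"fmt"
	"time"
)

const (
	initialBackOff = 10 * time.Second // kubelet's usual first crash-loop wait
	maxBackOff     = 5 * time.Minute  // MaxContainerBackOff in the kubelet
)

// backOff models the doubling wait before restart number n.
func backOff(n int) time.Duration {
	d := initialBackOff
	for i := 0; i < n; i++ {
		d *= 2
		if d >= maxBackOff {
			return maxBackOff
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("restart %d -> wait %v\n", n, backOff(n))
	}
	// A spec change (e.g. a new image) wipes the back-off entry, which is
	// why the log shows only 13s after the update instead of minutes.
	fmt.Println("after image update -> wait", backOff(0))
}
------------------------------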
• [SLOW TEST:397.913 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
------------------------------
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":6,"skipped":822,"failed":0}
Nov 6 01:15:23.168: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:06:57.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Nov 6 01:06:57.613: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:06:59.616: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:07:01.619: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:07:03.617: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:07:05.618: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:07:07.618: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:07:09.616: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:07:11.618: INFO: The status of Pod back-off-cap is Running (Ready = false)
Nov 6 01:07:13.617: INFO: The status of Pod back-off-cap is Running (Ready = false)
Nov 6 01:07:15.619: INFO: The status of Pod back-off-cap is Running (Ready = false)
Nov 6 01:07:17.617: INFO: The status of Pod back-off-cap is Running (Ready = false)
Nov 6 01:07:19.617: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Nov 6 01:18:45.007: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-11-06 01:13:33 +0000 UTC restartedAt=2021-11-06 01:18:43 +0000 UTC (5m10s)
Nov 6 01:23:59.409: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-11-06 01:18:48 +0000 UTC restartedAt=2021-11-06 01:23:58 +0000 UTC (5m10s)
Nov 6 01:29:08.746: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-11-06 01:24:03 +0000 UTC restartedAt=2021-11-06 01:29:07 +0000 UTC (5m4s)
STEP: getting restart delay after a capped delay
Nov 6 01:34:15.069: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-11-06 01:29:12 +0000 UTC restartedAt=2021-11-06 01:34:14 +0000 UTC (5m2s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:34:15.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8259" for this suite.
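------------------------------
Each "getRestartDelay" line in this spec is the gap between the previous container exit (FinishedAt on the last termination state) and the next start (StartedAt on the running state), read off the pod's container status; once capped, it hovers near five minutes (5m10s, 5m10s, 5m4s, 5m2s) rather than growing further. A minimal client-go sketch of that computation follows. It is our own illustration under stated assumptions (the kubeconfig path from this log, namespace "default", the container at index 0), not the e2e framework's helper.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite used.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "back-off-cap", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Delay = time between the last exit and the current start.
	st := pod.Status.ContainerStatuses[0]
	if st.LastTerminationState.Terminated != nil && st.State.Running != nil {
		finished := st.LastTerminationState.Terminated.FinishedAt.Time
		restarted := st.State.Running.StartedAt.Time
		fmt.Printf("restartCount = %d, finishedAt=%v restartedAt=%v (%v)\n",
			st.RestartCount, finished, restarted,
			restarted.Sub(finished).Round(time.Second))
	}
}
------------------------------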
• [SLOW TEST:1637.504 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":160,"failed":0}
Nov 6 01:34:15.081: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":8,"skipped":1001,"failed":0}
Nov 6 01:09:06.571: INFO: Running AfterSuite actions on all nodes
Nov 6 01:34:15.120: INFO: Running AfterSuite actions on node 1
Nov 6 01:34:15.120: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5770 Specs in 1656.281 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5717 Skipped

Ginkgo ran 1 suite in 27m37.822553732s
Test Suite Failed
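------------------------------
A closing note on the long "node status heartbeat" span earlier in this section: that output comes from polling the Node object about once a second and checking whether any condition's LastHeartbeatTime advanced. The diffs show only the three pressure conditions' heartbeats moving, every ~10s, while every other field stays identical. The loop below is a rough reconstruction of that polling pattern, not the actual NodeLease test code; the node name and kubeconfig path are taken from this log, and the bounded iteration count is our simplification of the test's five-minute watch window.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// latestHeartbeat returns the newest LastHeartbeatTime across conditions.
func latestHeartbeat(conds []v1.NodeCondition) time.Time {
	var t time.Time
	for _, c := range conds {
		if c.LastHeartbeatTime.Time.After(t) {
			t = c.LastHeartbeatTime.Time
		}
	}
	return t
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	var last time.Time
	changed := time.Now()
	for i := 0; i < 30; i++ { // bounded stand-in for the test's 5m window
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if hb := latestHeartbeat(node.Status.Conditions); hb.After(last) {
			fmt.Printf("node status heartbeat changed to %v\n", hb)
			last, changed = hb, time.Now()
		} else {
			fmt.Printf("node status heartbeat is unchanged for %v\n",
				time.Since(changed).Round(time.Millisecond))
		}
		time.Sleep(time.Second)
	}
}
------------------------------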